Narrative structure of A Song of Ice and Fire creates a fictional world with realistic measures of social complexity

From The Proceedings of the National Academy of Sciences of the United States of America:

We use mathematical and statistical methods to probe how a sprawling, dynamic, complex narrative of massive scale achieved broad accessibility and acclaim without surrendering to the need for reductionist simplifications. Subtle narrational tricks such as how natural social networks are mirrored and how significant events are scheduled are unveiled. The narrative network matches evolved cognitive abilities to enable complex messages to be conveyed in accessible ways while story time and discourse time are carefully distinguished in ways matching theories of narratology. This marriage of science and humanities opens avenues to comparative literary studies. It provides quantitative support, for example, for the widespread view that deaths appear to be randomly distributed throughout the narrative even though, in fact, they are not.

. . . .

Network science and data analytics are used to quantify static and dynamic structures in George R. R. Martin’s epic novels, A Song of Ice and Fire, works noted for their scale and complexity. By tracking the network of character interactions as the story unfolds, it is found that structural properties remain approximately stable and comparable to real-world social networks. Furthermore, the degrees of the most connected characters reflect a cognitive limit on the number of concurrent social connections that humans tend to maintain. We also analyze the distribution of time intervals between significant deaths measured with respect to the in-story timeline. These are consistent with power-law distributions commonly found in interevent times for a range of nonviolent human activities in the real world. We propose that structural features in the narrative that are reflected in our actual social world help readers to follow and to relate to the story, despite its sprawling extent. It is also found that the distribution of intervals between significant deaths in chapters is different to that for the in-story timeline; it is geometric rather than power law. Geometric distributions are memoryless in that the time since the last death does not inform as to the time to the next. This provides measurable support for the widely held view that significant deaths in A Song of Ice and Fire are unpredictable chapter by chapter.

. . . .

A Song of Ice and Fire (hereinafter referred to as Ice and Fire) is a series of fantasy books written by George R. R. Martin. The first five books are A Game of Thrones, A Clash of Kings, A Storm of Swords, A Feast for Crows, and A Dance with Dragons. Since publication of the first book in 1996, the series has sold over 70 million copies and has been translated into more than 45 languages. Martin, a novelist and experienced screenwriter, conceived the sprawling epic as an antithesis to the constraints of film and television budgets. Ironically, the success of his books attracted interest from film-makers and television executives worldwide, eventually leading to the television show Game of Thrones, which first aired in 2011.

Storytelling is an ancient art form which plays an important role in social bonding. It is recognized that the social worlds created in narratives often adhere to a principle of minimal difference whereby social relationships reflect those in real life—even if set in a fantastical or improbable world. By implication, a social world in a narrative should be constructed in such a way that it can be followed cognitively. However, the role of the modern storyteller extends beyond the creation of a believable social network. As well as an engaging story, the manner in which it is told (the discourse) is important, over and above a simple narration of a sequence of events. This distinction is rooted in theories of narratology advocated by Shklovsky and Propp and developed by Metz, Chatman, Genette, and others.

Graph theory has been used to compare character networks to real social networks in mythological, Shakespearean, and fictional literature. To investigate the success of Ice and Fire, we go beyond graph theory to explore cognitive accessibility as well as differences between how significant events are presented and how they unfold. A distinguishing feature of Ice and Fire is that character deaths are perceived by many readers as random and unpredictable. Whether you are ruler of the Seven Kingdoms, heir to an ancient dynasty, or Warden of the North, your end may be nearer than you think. Robert Baratheon met his while boar hunting, Viserys Targaryen while feasting, and Eddard Stark when confessing a crime in an attempt to protect his children. Indeed, “Much of the anticipation leading up to the final season (of the TV series) was about who would live or die, and whether the show would return to its signature habit of taking out major characters in shocking fashion”. Inspired by this feature, we are particularly interested in deaths as signature events in Ice and Fire, and therefore, we study intervals between them. To do this, we recognize an important distinction between story time and discourse time. Story time refers to the order and pace of events as they occurred in the fictional world. It is measured in days and months, albeit using the fictional Westerosi calendar in the case of Ice and Fire. Discourse time, on the other hand, refers to the order and pacing of events as experienced by the reader; it is measured in chapters and pages.

We find that the social network portrayed is indeed similar to real-world social networks and remains, as presented, within our cognitive limit at any given stage. We also find that the order and pacing of deaths differ greatly between discourse time and story time. The discourse is presented in a way that appears more unpredictable than the underlying story; had it been told following Westerosi chronology, the deaths might have seemed far less random and shocking. We suggest that the remarkable juxtaposition of realism (verisimilitude), cognitive balance, and unpredictability is key to the success of the series.

. . . .

Ice and Fire is presented from the personal perspectives of 24 point of view (POV) characters. A full list of them, ranked by the numbers of chapters from their perspectives, is provided in SI Appendix. Of these, we consider 14 to be major: eight or more chapters, mostly titled with their names, are relayed from their perspectives. Tyrion Lannister is major in this sense because the 47 chapters from his perspective are titled “Tyrion I,” “Tyrion II,” etc. Arys Oakheart does not meet this criterion as the only chapter related from his perspective is titled “The Soiled Knight.” We open this section by reporting how network measures reflect the POV structure. We then examine the network itself—how it evolves over discourse time, its verisimilitude, and the extent to which it is cognitively accessible. Finally, we analyze the distributions of time intervals between significant deaths and contrast these as measured in story time versus discourse time.

Link to the rest at The Proceedings of the National Academy of Sciences of the United States of America

PG notes that he has removed many footnote references in the OP from the excerpt above.
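
The paper's central statistical contrast is between two models of the intervals between significant deaths: a heavy-tailed power law for gaps measured in story time (Westerosi days) and a memoryless geometric distribution for gaps measured in discourse time (chapters). The sketch below is only an illustration of how such a comparison can be set up; it uses synthetic intervals and crude maximum-likelihood fits, not the authors' data or code.

    import numpy as np

    def fit_geometric(intervals):
        """Crude MLE fit of a geometric distribution P(k) = (1 - p)**(k - 1) * p, k = 1, 2, ..."""
        k = np.asarray(intervals, dtype=float)
        p = 1.0 / k.mean()
        loglik = np.sum((k - 1) * np.log1p(-p) + np.log(p))
        return p, loglik

    def fit_power_law(intervals, k_min=1):
        """Crude MLE fit of a discrete power law P(k) ~ k**(-alpha) for k >= k_min."""
        k = np.asarray(intervals, dtype=float)
        alpha = 1.0 + len(k) / np.sum(np.log(k / (k_min - 0.5)))  # continuous approximation
        z = np.sum(np.arange(k_min, 10_000) ** -alpha)            # truncated normalizer
        loglik = np.sum(-alpha * np.log(k)) - len(k) * np.log(z)
        return alpha, loglik

    # Synthetic stand-ins for the real measurements: chapter gaps (discourse time)
    # drawn from a geometric distribution, day gaps (story time) from a heavy tail.
    rng = np.random.default_rng(0)
    chapter_gaps = rng.geometric(p=0.3, size=60)
    day_gaps = np.rint(rng.pareto(a=1.5, size=60) + 1).astype(int)

    for label, gaps in [("discourse time (chapters)", chapter_gaps),
                        ("story time (days)", day_gaps)]:
        p, ll_geo = fit_geometric(gaps)
        alpha, ll_pl = fit_power_law(gaps)
        winner = "geometric" if ll_geo > ll_pl else "power law"
        print(f"{label}: p_hat={p:.2f}, alpha_hat={alpha:.2f}, better fit: {winner}")

On real interval data the same comparison would simply swap in the measured gaps; the excerpt reports that the chapter gaps favor the memoryless geometric model while the in-story gaps are consistent with a power law.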

6 Sci-Fi Writers Imagine the Beguiling, Troubling Future of Work

From Wired:

THE FUTURE OF collaboration may look something like … Twitter’s Magical Realism Bot. Created by sibling team Ali and Chris Rodley, it randomly recombines words and phrases from an ever-growing database of inputs. The results are absurdist, weird, whimsical: “An old woman knocks at your door. You answer it, and she hands you a constellation.” “Every day, a software developer starts to look more and more like Cleopatra.” “There is a library in Paris where you can borrow question marks instead of books.” People ascribe intentionality and coherence to these verbal mash-ups; in the end, they sound like stories drawn from a wild imagination. A bot’s output, engineered by humans, creates a unique hybrid artform.
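
The bot's method as described, randomly recombining words and phrases from a growing database, is easy to sketch. The templates and word lists below are hypothetical stand-ins rather than the Rodleys' actual data or code; they only illustrate the kind of slot-filling such a generator can run on.

    import random

    # Hypothetical stand-ins for the bot's ever-growing database of inputs.
    TEMPLATES = [
        "An old woman knocks at your door. You answer it, and she hands you {thing}.",
        "Every day, {person} starts to look more and more like {figure}.",
        "There is a library in {city} where you can borrow {thing} instead of books.",
    ]
    SLOTS = {
        "thing": ["a constellation", "question marks", "a folded thunderstorm"],
        "person": ["a software developer", "a lighthouse keeper"],
        "figure": ["Cleopatra", "a minor Greek deity"],
        "city": ["Paris", "Buenos Aires"],
    }

    def generate(rng: random.Random) -> str:
        """Pick a template and fill each slot with a randomly chosen phrase."""
        template = rng.choice(TEMPLATES)
        return template.format(**{slot: rng.choice(options) for slot, options in SLOTS.items()})

    if __name__ == "__main__":
        rng = random.Random()
        for _ in range(3):
            print(generate(rng))

As the piece notes, the intentionality and coherence are supplied by the reader, not by the recombination itself.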

. . . .

A century ago, when Karel Čapek’s play R.U.R., or Rossum’s Universal Robots, debuted in Prague, his “roboti” lived as enslaved creations, until they rebelled and destroyed humankind (thus immortalizing a common science-fictional trope). Čapek’s play is a cautionary tale about how humans treat others who are deemed lesser, but it also holds a lesson about collaboration: Technology reflects the social and moral standards we program into it. For every Magical Realism Bot, there are countless more bots that sow discord, perpetuate falsehoods, and advocate violence. Technology isn’t to blame for bigotry, but tech has certainly made it more curatable.

Today’s collaborative tension between humans and machines is not a binary divide between master and servant—who overthrows whom—but a question of integration and its social and ethical implications. Instead of creating robots to perform human labor, people build apps to mechanize human abilities. Working from anywhere, we are peppered with bite-sized names that fit our lives into bite-sized bursts of productivity. Zoom. Slack. Discord. Airtable. Notion. Clubhouse. Collaboration means floating heads, pop-up windows, chat threads. While apps give us more freedom and variety in how we manage our time, they also seem to reduce our personalities to calculations divided across various digital platforms. We run the risk of collaborating ourselves into auto-automatons.

As an editor of science fiction, I think about these questions and possibilities constantly. How are our impulses to fear, to hope, and to wonder built into the root directories of our tech? Will we become more machine-like, or realize the humanity in the algorithm? Will our answers fall somewhere in symbiotic in-between spaces yet unrealized? 

. . . .

‘Work Ethics,’ by Yudhanjaya Wijeratne

“SO YOU’RE TELLING me we’re going to be automated out of existence,” Romesh said. “I’m telling you that what you’re doing is wrong, wrong, wrong, and if you had any morals you’d shoot yourself.”

The complaint was made in a bar that was mostly cigarette smoke by this point, and to a circle of friends that, having gathered for their quarterly let’s-meet-up-and-catch-up thing, had found each other just as tiresome as before. Outside, the city of Colombo was coming to a crawl of traffic lights and halogen, the shops winking out, one by one, as curfew regulations loomed. Thus the drunken ruminations of Romesh Algama began to seem fundamentally less interesting.

Except one. Kumar, who frequented this particular bar more than most, bore Romesh’s ire with the sort of genial patience that one acquires after half a bottle of rum. “You don’t understand, man,” Kumar said. “It’s coming, whether you want it to or not. You’ve seen that photo of the man in front of a tank at Tiananmen Square? What would you rather be, the man or the tank?”

“That’s a horrible analogy. And the tanks stopped.”

“Yeah, well, you’re the writer,” said Kumar. “Me, I just test the code. We’re out of rum.” He waved his arms at a retreating waiter. “Machang! Another half—two Cokes!”

“All this talk about AI and intelligence and, and,” continued Romesh, as the waiter emerged from the fog of smoke, less a creature of logistics and more a midnight commando easing drinks through barfights waiting to happen. “And neuroscience and really, you know what you people are all doing? You’re just making more ways for rich people to make more money, and then what do we do? Eh? Eh, Kumar?”

. . . .

“We’ll be fine, don’t worry,” said Kumar. “Even if, and I mean big if, we all get replaced over the next 10 years, there’ll be plenty more jobs, trust me. It’s how technological whatevermajig always works. New problems, new careers.”

“We won’t be fine,” said Romesh, who fancied he knew a thing or two about automation. He came from generations of Sri Lankan tea-estate owners who had, over time, replaced the Tamil laborers who worked for them with shiny new machines from China.

Kumar patted him on the shoulder. By now motor coordination had jumped out the window and plummeted three stories to its death, so his cheery gesture was more like a rugby scrum half slamming Romesh on the way to the locker.

. . . .

IT WASN’T THAT Romesh was incompetent. Untrained at first, perhaps, and a little bit overlooked back when he started, when advertising in Sri Lanka was in its cut-rate Mad Men era. Over the years he had shadowed enough people—first the copywriters, then the art directors, then various creative heads, until he had become, if not naturally gifted, a very close approximation. He even had a touch of the auteur about him, a well-heeled set of just the right eccentricities so admired in an industry which was mostly made up of disgruntled writers. Every so often Romesh went off like a budget Hiroshima over the smallest mistakes; drove graphics designers to tears; walked into meetings late, unkempt, and told clients that they didn’t know what they wanted, and refused altogether to suck up to the right kinds of people; and, above all, delivered. The evidence mounted over the years in the awards and the Christmas hampers from grateful clients. He had earned that rare and elusive acknowledgement, whispered behind his back: He’s a Creative. The Capital C.

The problem was the toll it took. Nobody talked about how much damage it did, churning out great copy by the hour, on the hour, watching your best work being rejected by clients with the aesthetic sense of a colony of bacteria on the Red Sea: struggling constantly to reskill, to stay relevant, and sucking up the sheer grind of it all, and coming back to work with a grin the next day. The first five years, he had been sharp and fast, saying yes to everything. The next five, sharper, but a lot more selective. The next three were spent hiding exhaustion under the cloak of his right to choose what he worked on, and when; the next two were twilight years, as everyone he knew, having realized what the industry did to them, moved on to happier pursuits, until he was left behind like a king on his lonely hill, and the crew were younger, sharper, looking up at the old man in both awe and envy.

The accident had only made it worse; people muttered, sometimes, about how Romesh was barely a face on the screen anymore, never actually came out to the office to hang out and brainstorm, but delivered judgment in emails that started with LISTEN HERE and ended in cussing.

“Like working with a ghost,” his latest art director had said of him, before quitting. “Or [an] AI.” The word behind his back was that Romesh Algama was losing his touch.

. . . .

Software companies were looked down on in the ad world; anyone writing for them eventually picked up that peculiar mix of useless jargon and middle-grade writing that passed for tech evangelism, and it never quite wore off.

The Boss sounded amused, though it was always hard to tell over the WhatsApp call. “Look, end of year, I want no trouble and decent numbers,” they said. “The kids are young and hungry. And you, well—”

You’re not in the best shape anymore. It went unsaid between them.

“You know what you should have done was retire and go consultant,” the Boss said. “Work twice a year, nice pot of money, invest in a beach bar, get a therapist, do some yoga … ”

“Yeah, and how many of those jobs you got lying around?” he said. “You can go live out your James Bond fantasy. Rest of us got to pay rent and eat.”

The Boss made that gesture and rang off. Comme ci, comme ça. It was planned obsolescence. Death by a thousand cuts.

“Don’t be late for the review meeting.”

“I promise you, it’s on my calendar,” lied Romesh, and cut the call.

. . . .

“Romesh. For once. Stop talking. Email. You see a link?”

Romesh peered at the screen. “Tachikoma?”

“It’s a server. Sign in with your email. I’ve given you login credentials.”

Romesh clicked. A white screen appeared, edged with what looked like a motif of clouds, and a cursor, blinking serenely in the middle. The cursor typed, SCANNING EMAIL.

“The way this works is it’s going to gather a bit of data on you,” said Kumar. “You might be prompted for phone access.”

SCANNING SOCIAL MEDIA, said the white screen, and then his phone vibrated. TACHIKOMA WANTS TO GET TO KNOW YOU, said the message. PLEASE SAY YES.

“This feels super shady, Kumar. Is this some sort of prank?”

“Just … trust me, OK. It’s an alpha build, it’s not out to the public yet. And don’t worry, I’m not looking at your sexting history here.”

He typed YES and hit send.

“After it does its thing, you tell it what you’re thinking of,” said Kumar. “You know. Working on a campaign, maybe you need ideas. Type in whatever is floating around in your mind at the time.”

“And?”

“You might get some answers.”

“Back up, back up,” said Romesh, feeling a headache coming on. “How does this work, exactly?”

“You know what a self-directing knowledge graph is? Generative transformer networks?”

“No idea.”

“Universal thesauri?”

“I can sell that if you pay me for it.”

“Well, there’s no point me telling you, is there,” said Kumar.

“You’re using me as a guinea pig, aren’t you?”

“Try it out,” said Kumar. “It might be a bit stupid when you start, but give it a few days. Drinks on me next time if you actually use the thing. Remember, tank, student, student, tank, your pick.” He hung up.

So it was with some unease that Romesh went back to the kitchen, brewing both coffee and ideas for the last Dulac ad. Swordplay, cleaning a perfect sword before battle, link to—teeth? body?—then product. He came back, typed those words into the Tachikoma prompt, which ate them and went back to its blinking self.

. . . .

To his surprise, there was a message waiting for him when he got back. SUNLIGHT, it said. CLEANSING FIRE.

Sunlight.

He scrolled down the message, where a complex iconography shifted around those words. Phrases and faces he’d used before. Sentiments.

He’d never thought of using sunlight. Swordplay, samurai cleaning a perfect sword before battle, sword glinting in the sun, outshining everything else—

A smile crept up Romesh’s jagged face. He put his steaming coffee down, feeling that old familiar lightning dancing around his mind, through his fingers, and set to work.

“DULAC CALLED,” THE Boss said at the end of the week. “That whole Cleansing Fire campaign we did.”

“Bad?” said Romesh, who had come to expect nothing good of these conversations.

“Depends,” said the Boss. “Sales have tripled. They’re insisting you stay in charge of that account.”

Romesh toyed with his mug a little.

“That was a bit underhanded,” said the Boss. “Good stuff, but showing off just so you could one-up the kid.”

“Perks of being old,” said Romesh. “We don’t play fair, we play smart.”

“Well,” said the Boss. “If I’d known pissing you off got results, I’d have done it years ago. Up for another account?”

There is a bunch more at Wired

To Hold Up the Sky

From The Wall Street Journal:

Cixin Liu’s “Remembrance of Earth’s Past” trilogy, which began with “The Three-Body Problem,” is arguably the most significant work of science fiction so far this century: full of ideas, full of optimism, enormous in scale. But, with more than 1,000 pages across three books, the series demands a high level of commitment from readers. Mr. Liu’s new story collection, “To Hold Up the Sky” (Tor, 334 pages, $27.99), shows us where he’s coming from, and how far he’s come.

The 11 stories here were all first published in China, some as long as 20 years ago. In his introduction, Mr. Liu denies that there is any systemic difference between Chinese and Western sci-fi. Both have the same underlying theme: the immense difference between the scale of humans as individuals and the scale of the universe around us. This shows in the first story, “The Village Teacher.” Its scenes shift from a mountain village, where a primary-school teacher lies on his deathbed, explaining Newton with his last breath, to a million-warship galactic war, in which Earth and humanity are about to be destroyed. Unless, that is, randomly selected samples, who happen to be from the old teacher’s last class, can prove humanity’s intelligence. Can the small, for once, confound the great?

The poverty scenes in this collection are moving in a way not normally found in sci-fi, but one has to say that the “casual elimination by aliens” trope was old by the time of “Hitchhiker’s Guide.” In “Full-Spectrum Barrage Jamming,” Mr. Liu imagines the final shootout between Russia and NATO, as it might have seemed back in 2001, when the story was first published. It’s a battlefield full of Abrams and T-90 tanks, as well as Comanche helicopters and a Russian orbital fort—but all of them are rendered useless by electronic counter-measures. So it’s back to bayonets. Done well, but the same development was at the heart of Gordon Dickson’s “Dorsai” stories a long generation ago.

. . . .

Mr. Liu’s strength is narrowing the large-scale tech down to agonizing issues for individuals. That could be us. 

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

The Ministry for the Future

From The Los Angeles Review of Books:

IT SEEMS PERVERSELY easier to tell a science fictional story about a world centuries in the future than the one just a few years away. Somehow we have become collectively convinced that massive world-historical changes are something that cannot happen in the short term, even as the last five years alone have seen the coronavirus pandemic; the emergence of CRISPR gene editing; too many droughts, hurricanes, and wildfires to count; the legalization of gay marriage in many countries, including the United States; mass shooting after mass shooting after mass shooting; the #MeToo and #BlackLivesMatter movements; the emergence of self-driving cars; Brexit; and the election of Donald Trump to the presidency of the United States. We are living through historic times — the most wildly tumultuous period of transformation and catastrophe for the planet since the end of World War II, with overlapping political, social, economic, and ecological crises that threaten to turn the coming decades into hell on Earth — but it has not helped us to think historically, or to understand that no matter how hard we vote things are never going to “get back to normal.” Everything is different now.

Everything is always different, yes, fine — but everything is really different now.

The Ministry for the Future is Kim Stanley Robinson’s grimmest book since 2015’s Aurora, and likely the grimmest book he has written to date — but it is also one of his most ambitious, as he seeks to tell the story of how, given what science and history both tell us to be true, the rest of our lives could be anything but an endless nightmare. It is not an easy read, with none of the strategies of spatial or temporal distancing that make Mars or the Moon or the New York of 2140 feel like spaces of optimistic historical possibility; it’s a book that calls on us instead to imagine living through a revolution ourselves, as we are, in the here and now. Robinson, our culture’s last great utopian, hasn’t lost heart exactly — but he’s definitely getting deep down into the muck of things this time.

Link to the rest at The Los Angeles Review of Books

PG will note that, given the pace of traditional publishing, the ms. for this book was probably created a year or two ago.

How Science Fiction Works

From Public Books:

World-renowned science fiction novelist Kim Stanley Robinson is a world builder beyond compare. His political acumen makes his speculations feel alive in the present—as well as laying out a not-so-radiant future. He is the author of more than 20 novels and the repeat winner of most major speculative fiction prizes; his celebrated trilogies include Three Californias, Science in the Capital, and (beloved in my household) the Mars Trilogy: Red, Green, and Blue.

. . . .

John Plotz (JP): You have said that science fiction is the realism of our times. How do people hear that statement today? Do they just hear the word COVID and automatically start thinking about dystopia?

Kim Stanley Robinson (KSR): People sometimes think that science fiction is about predicting the future, but that isn’t true. Since predicting the future is impossible, that would be a high bar for science fiction to have to get over. It would always be failing. And in that sense it always is failing. But science fiction is more of a modeling exercise, or a way of thinking.

Another thing I’ve been saying for a long time is something slightly different: We’re in a science fiction novel now, which we are all cowriting together. What do I mean? That we’re all science fiction writers because of a mental habit everybody has that has nothing to do with the genre. Instead, it has to do with planning and decision making, and how people feel about their life projects. For example, you have hopes and then you plan to fulfill them by doing things in the present: that’s utopian thinking. Meanwhile, you have middle-of-the-night fears that everything is falling apart, that it’s not going to work. And that’s dystopian thinking.

So there’s nothing special going on in science fiction thinking. It’s something that we’re all doing all the time.

And world civilization right now is teetering on the brink: it could go well, but it also could go badly. That’s a felt reality for everybody. So in that sense also, science fiction is the realism of our time. Utopia and dystopia are both possible, and both staring us in the face.

Let’s say you want to write a novel about what it feels like right now, here in 2020. You can’t avoid including the planet. It’s not going to be about an individual wandering around in their consciousness of themselves, which modernist novels often depict. Now there’s the individual and society, and also society and the planet. And these are very much science fictional relationships—especially that last one.

JP: When you think of those as science fictional relationships, where do you place other speculative genres, such as fantasy or horror? Do they sit alongside science fiction—in terms of its “realism”—or are they subsets?

KSR: No, they’re not subsets, more like a clustering. John Clute, who wrote the Encyclopedia of Science Fiction and a big part of the Encyclopedia of Fantasy, has a good term that he’s taken from Polish: fantastika. Fantastika is anything that is not domestic realism. That could be horror, fantasy, science fiction, the occult, alternative histories, and others.

Among those, I’m interested mostly in science fiction. Which, being set in the future, has a historical relationship that runs back to the present moment.

Fantasy doesn’t have that history. It’s not set in the future. It doesn’t run back to our present in a causal chain.

So the moment I say that, you can bring up fantasies in which Coleridge runs into ghosts, or about time traveling, or whatever. Still, as a first cut, it’s a useful definition. But definitions are always a little troublesome.

Link to the rest at Public Books

Building Character: Writing a Backstory for Our AI

From The Paris Review:

Eliza Doolittle (after whom the iconic AI therapist program ELIZA is named) is a character of walking and breathing rebellion. In George Bernard Shaw’s Pygmalion, and in the musical adaptation My Fair Lady, she metamorphoses from a rough-and-tumble Cockney flower girl into a self-possessed woman who walks out on her creator. There are many such literary characters that follow this creator-creation trope, eventually rejecting their creator in ways both terrifying and sympathetic: after experiencing betrayal, Frankenstein’s monster kills everyone that Victor Frankenstein loves, and the roboti in Karel Čapek’s Rossum’s Universal Robots rise up to kill the humans who treat them as a slave class.

It’s the most primordial of tales, the parent-child story gone terribly wrong. We’ve long been captivated by the idea of creating new nonhuman life, and equally captivated by the punishment we fear such godlike powers might trigger. In a world of growing AI beings, such dystopian outcomes are becoming real fears. As we set out to create these alternate beings, the questions of how we should design them, what they should be crafted to say and do, become questions of not only art and science but morality.

. . . .

But morality has no resonance unless the art rings true. And, as I’ve argued before, we want AI interactions that are not just helpful but beautiful. While there is growing discussion of functional and ethical considerations in AI development, there are currently few creative guidelines for shaping those characters. Many AI designers sit down and begin writing simple scripts for AI before they ever consider the larger picture of what—or who—they are creating. For AI to be fully realized, like fictional characters, they need a rich backstory. But an AI is not quite the same as a fictional character; nor is it a human. An AI is something between fictional and real, human and machine. For now, its physical makeup is inorganic—it consists not of biological but of machine material, such as silicon and steel. At the same time, AI differs from pure machine (such as a toaster or a calculator) in its “artificially” humanistic features. An AI’s mimetic nature is core to its identity, and these anthropomorphic features, such as name, speech, physical form, or mannerisms, allow us to form a complex relationship to it.

. . . .

Similar to a birth story for a human or fictional character, AI needs a strong origin story. In fact, people are even more curious about an AI origin story than a human one. One of the most important aspects of an AI origin story is who its creator is. The human creator is the “parent” of the AI, so his or her own story (background, personality, interests) is highly relevant to an AI’s identity. Preliminary studies at Stanford University indicate that people attribute an AI’s authenticity to the trustworthiness of its maker. Other aspects of the origin story might be where the AI was built, i.e., in a lab or in a company, and stories around its development, perhaps “family” or “siblings” in the form of other co-created AI or robots. Team members who built the AI together are relevant as co-creators who each leave their imprint, as is the town, country, and culture where the AI was created. The origin story informs those ever-important cultural references. And aside from the technical, earthly origin story for the AI, there might be a fictional storyline that explains some mythical aspects of how the AI’s identity came to be—for example, a planet or dimension the virtual identity lived in before inhabiting its earthly form, or a Greek-deity-like organization involving fellow beings like Jarvis or Siri or HAL. A rich and creative origin story will give substance to what may later seem like arbitrary decisions around the AI personality—why, for example, it prefers green over red, is obsessed with ikura, or wants to learn how to whistle.

. . . .

AI should be designed with a clear belief system. This forces designers to think about their own values, and may allay public fears about a society of “amoral” AI. We all have belief systems, whether we can articulate them or not. They drive our behaviors and thoughts and decision-making. As we see in literature, someone who believes “I must make my fate” will behave and speak differently from one who believes “Fate has already decided for me”—and their lives and storylines will unfold accordingly. AI characters should be created with a belief system somewhat akin to a mission statement. Beliefs about purpose, life, other people, will give the AI a system around which to organize decision-making. Beliefs can be both programmed and adopted. Programmed beliefs are ones that the designers and writers code into the AI. Adopted beliefs would evolve as a combination of programming and additional data the AI accumulates as it begins to experience life and people. For example, an AI may be coded with the programmed belief “Serving people is the greatest purpose.” As it takes in data that would challenge this belief (i.e., interacting with rude, greedy, inconsiderate people), this data would interact with another algorithm, such as high resilience and optimism, and would form a new, related, adopted belief: “Humans are under a lot of stress so may not always act nicely. This should not change the way I treat them.”

Link to the rest at The Paris Review
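
The split between programmed and adopted beliefs described in the excerpt is, at bottom, a data-structure decision. The sketch below shows one minimal way it could be represented; the class, field names, and threshold are hypothetical illustrations rather than any real AI framework or the essay author's design.

    from dataclasses import dataclass, field

    @dataclass
    class BeliefSystem:
        """Toy model of programmed beliefs (fixed by designers) versus adopted
        beliefs (derived at runtime from accumulated experience plus temperament)."""
        programmed: list                  # written by the designers, never changed at runtime
        resilience: float = 0.8           # stand-in for a "high resilience and optimism" trait
        adopted: list = field(default_factory=list)
        negative_interactions: int = 0

        def observe(self, was_negative: bool) -> None:
            """Record one interaction and, if warranted, adopt a new belief."""
            if was_negative:
                self.negative_interactions += 1
            # A resilient agent reframes repeated negative data rather than
            # abandoning its programmed purpose (threshold chosen arbitrarily).
            if self.negative_interactions >= 3 and self.resilience > 0.5:
                reframed = ("Humans are under a lot of stress, so they may not always "
                            "act nicely. This should not change the way I treat them.")
                if reframed not in self.adopted:
                    self.adopted.append(reframed)

    ai = BeliefSystem(programmed=["Serving people is the greatest purpose."])
    for _ in range(3):
        ai.observe(was_negative=True)
    print(ai.adopted)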

Liu Cixin Writes Science Fiction Epics That Transcend the Moment

From The Wall Street Journal:

Science fiction can be hard to disentangle from the real world. Futuristic tales about advanced technology and clashing alien civilizations often read like allegories of present-day problems. It is tempting, then, to find some kind of political message in the novels of Liu Cixin, 57, China’s most famous science fiction writer, whose speculative and often apocalyptic work has earned the praise of Barack Obama and Mark Zuckerberg. The historian Niall Ferguson recently said that reading Mr. Liu’s fiction is essential for understanding “how China views America and the world today.”

But Mr. Liu insists that this is “the biggest misinterpretation of my work.” Speaking through an interpreter over Skype from his home in Shanxi Province, he says that his books, which have been translated into more than 20 languages, shouldn’t be read as commentaries on China’s history or aspirations. In his books, he maintains, “aliens are aliens, space is space.” Although he has acknowledged, in an author’s note to one of his books, that “every era puts invisible shackles on those who have lived through it,” he says that he writes science fiction because he enjoys imagining a world beyond the “narrow” one we live in. “For me, the essence of science fiction is using my imagination to fill in the gaps of my dreams,” says Mr. Liu.

In China, science fiction has often been inseparable from ideology. A century ago, early efforts in the genre were conspicuously nationalistic: “Elites used it as a way of expressing their hopes for a stronger China,” says Mr. Liu. But the 1966-76 Cultural Revolution banned science fiction as subversive, and critics in the 1980s argued that it promoted capitalist ideas. “After that, science fiction was discouraged,” Mr. Liu remembers.

In recent years, however, the genre has been making a comeback. This is partly because China’s breakneck pace of modernization “makes people more future-oriented,” Mr. Liu says. But the country’s science fiction revival also has quite a lot to do with Mr. Liu himself.

In 2015, he became the first Asian writer to win the Hugo Award, the most prestigious international science fiction prize. A 2019 adaptation of his short story “The Wandering Earth” became China’s third-highest-grossing film of all time, and a movie version of his bestselling novel “The Three-Body Problem” is in the works. His new book, “To Hold Up the Sky,” a collection of stories, will be published in the U.S. in October. (His American books render his name as Cixin Liu, with the family name last, but Chinese convention is to put the family name first.)

. . . .

His first book appeared in 1989, and for years he wrote while working as an engineer at a state-owned power plant. The publication of “The Three-Body Problem,” in 2006, made him famous, and after a pollution problem shut the plant down in 2010, he devoted himself to writing full-time.

Mr. Liu’s renowned trilogy “Remembrance of Earth’s Past,” published in China between 2006 and 2010, tells the story of a war between humans on Earth and an alien civilization called the Trisolarans who inhabit a planet in decline. The story begins in the 1960s, in the years of the Cultural Revolution, and eventually zooms millions of years into the future. The aliens’ technological superiority and aggressive desire to exploit Earth’s resources have made some readers see them as a metaphor for the colonial Western powers China struggled against for more than a century. But Mr. Liu says this is too limited a view of his intentions. What makes science fiction “so special,” he says, is that its narratives often encourage us to “look past boundaries of nations and cultures and races, and instead really consider the fate of humankind as a whole.”

The English version of “The Three-Body Problem,” the first book in the trilogy, differs from the original in a small but telling way. In this 2014 translation, the story begins with an episode from the Cultural Revolution, in which a character’s father is publicly humiliated and killed for his “reactionary” views. The translator Ken Liu (no relation to the author) moved the scene to the start of the book from the middle, where Mr. Liu admits he had buried it in the original Chinese because he was wary of government censors.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

The Gaming Mind

From The Wall Street Journal:

Videogame fans were elated in April when developer Square Enix released its long-awaited remake of “Final Fantasy VII,” considered by many to be one of the greatest games of all time. The original game, released in 1997 for PlayStation, had everything: an expansive story across three playable discs; an engaging battle system replete with magic spells; and a cast of compelling characters, not least the game’s iconic hero, the spiky-haired Cloud Strife (and his nemesis, Sephiroth). I have fond memories of playing FFVII in my youth. Having the chance this spring, stuck inside during the pandemic, to revisit an expanded and upgraded version of this childhood touchstone was greatly satisfying.

Alexander Kriss has also been enthralled by videogames. In “The Gaming Mind,” Mr. Kriss, a clinical psychologist in New York, describes playing “Silent Hill 2” as a teenager. “I played its twelve-hour runtime back-to-back, probably a dozen times,” he says. “I discussed it exhaustively on message boards behind the veil of online anonymity.” For all its grim subject matter—the protagonist is a widower who visits a haunted town in search of his dead wife, doing battle with monsters along the way—the game proved a balm for Mr. Kriss, who had recently lost a friend to suicide. “My relationship with Silent Hill 2 reflected who I was and what I was going through, not only because of what I played but how I played it.”

“Silent Hill 2” is one of a number of games that figure in Mr. Kriss’s book, which brings a critical sensibility—his chapter headings have epigraphs from the likes of Ursula K. Le Guin and Saul Bellow—to videogames. The author is quite entertaining when holding forth on specific titles. He describes “Minecraft,” in which players build structures out of blocks, as “a vast, virtual sandpit” where “everything has a pixelated, low-resolution quality, as if drawn from an earlier generation of videogames when technology was too limited to make things appear vivid and realistic.”

“The Gaming Mind” seeks in part to dismantle the stigma that surrounds videogames and the archetypal “gamer kid,” a term Mr. Kriss dislikes. Much of the book recounts the author’s experience in therapy sessions, in which discussions of his patients’ videogame habits provided a basis for a breakthrough. The book also works in some early history of the industry, delves into the debate over whether videogames cause real-world violence (Mr. Kriss thinks these claims are wildly exaggerated) and parses the differences between various types of games.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Don’t Forget the H

From SFWA:

The horror genre is undergoing a renaissance these days, with audiences devouring popular and critically acclaimed books, movies, and television series. If you’re a science fiction or fantasy writer who’d like to add more horror to your authorial toolbox, but you’re not quite sure how to go about it, you’re in luck, because that’s what this article is all about.

A lot of people’s views on horror have been shaped by slasher films, simplistic predator-stalks-prey stories with lots of blood and sex. But the genre of horror performs some very important functions for its audience beyond providing simple scares. Horror is a way for us to face our fears and come to terms with death and the “evil” in the world. Through horror, we explore, confront, and (hopefully) make peace with our dark side. And as a particular benefit for writers, horror can add a different level of suspense and emotional involvement for readers in any story.

Good horror is internal more than external. Horror stories are reaction stories. They’re not about monsters or monstrous forces as much as how characters react to monsters (or to becoming monsters themselves). Horror also thrives on fear of the unknown, so you should strive to avoid standard horror tropes such as bloodthirsty vampires or demon-possessed children, or rework them to make them more original and impactful for readers. Maybe your vampire is a creature that feeds on people’s memories, or maybe your possessed child is an android created to be a child’s companion who’s desperately trying to repel a hacker’s efforts to take over its system. Reworking a trope — dressing it in new clothes, so to speak — allows you to reclaim the power of its core archetype while jettisoning the cliched baggage it’s picked up over the years.

Link to the rest at SFWA

Previously, PG used the acronym SWFA instead of SFWA.

That’s the first mistake he’s made in the last five years and he apologizes immoderately.

Plans to Stitch a Computer into Your Brain

What could go wrong?

From Wired:

ELON MUSK DOESN’T think his newest endeavor, revealed Tuesday night after two years of relative secrecy, will end all human suffering. Just a lot of it. Eventually.

At a presentation at the California Academy of Sciences, hastily announced via Twitter and beginning a half hour late, Musk presented the first product from his company Neuralink. It’s a tiny computer chip attached to ultrafine, electrode-studded wires, stitched into living brains by a clever robot. And depending on which part of the two-hour presentation you caught, it’s either a state-of-the-art tool for understanding the brain, a clinical advance for people with neurological disorders, or the next step in human evolution.

The chip is custom-built to receive and process the electrical action potentials—“spikes”—that signal activity in the interconnected neurons that make up the brain. The wires embed into brain tissue and receive those spikes. And the robotic sewing machine places those wires with enviable precision, a “neural lace” straight out of science fiction that dodges the delicate blood vessels spreading across the brain’s surface like ivy.

If Neuralink’s technologies work as Musk and his team intend, they’ll be able to pick up signals from across a person’s brain—first from the motor cortex that controls movement but eventually throughout your think-meat—and turn them into machine-readable code that a computer can understand.

. . . .

“It’s not as if Neuralink will suddenly have this incredible neural lace and take over people’s brains. It will take a long time.”

Link to the rest at Wired

8 Anti-Capitalist Sci-Fi and Fantasy Novels

From Electric Lit:

Karl Marx may be famous for his thorough, analytic attack on capitalism (see: all three volumes and the 1000-plus pages of Das Kapital), but let’s be real: it’s not the most exciting to read. What if, just as a thought experiment, our works that reimagined current structures of power also had robots?

Speculative fiction immerses the reader in an alternate universe, hooking us in with a stirring narrative and intricate world-building—or the good stories do, anyways. Along the way, it can also challenge us to take a good look at our own reality, and question with an imaginative, open mind: how can we strive to create social structures that are not focused on white, patriarchal, cisgendered, and capitalist systems of inequity? 

As poet Lucille Clifton says, “We cannot create what we can’t imagine.” Imagination is an integral element to envisioning concrete change, one that goes hand-in-hand with hope. Although certain magical elements like talking griffins and time travel might be out of reach (at least for the present moment), fantasy and sci-fi novels allow us to imagine worlds that we can aspire towards. Whether through a satire that exposes the ridiculousness of banking or a steampunk rewriting of the Congo’s history, the authors below have found ways to critically examine capitalism—and its alternatives—in speculative fiction. 

Everfair by Nisi Shawl

A speculative fantasy set in neo-Victorian times, Shawl’s highly acclaimed novel imagines “Everfair,” a safe haven in what is now the Democratic Republic of the Congo. In Shawl’s version of the late 19th century, the Fabian Socialists—a real-life British group—and African-American missionaries band together to purchase a region of the Congo from King Leopold II (whose statue was recently defaced and removed from Antwerp, as a part of the global protest against racism). This region, Everfair, is set aside for formerly enslaved people and refugees, who are fleeing from King Leopold II’s brutal, exploitative colonization of the Congo. The residents of Everfair band together to try and create an anti-colonial utopia. Told from a wide range of characters and backed up with meticulous research, Shawl creates a kaleidoscopic, engrossing, and inclusive reimagination of what history could have been. “I had been confronted with the idea that steampunk valorized colonization and empire, and I really wanted to spit in its face for doing that,” Shawl states. Through her rewritten history of the Congo, Shawl challenges systems of imperialism and capitalism.

. . . .

Making Money by Terry Pratchett

If you stop to think about it, isn’t the concept of a credit card ridiculous? Pratchett’s characters would certainly agree. Pratchett’s Discworld series, as the Guardian noted, “started out as a very funny fantasy spoof [that] quickly became the finest satirical series running.” This installment follows con-man Moist von Lipwig (who first appeared in Pratchett’s spoof on the postal system, Going Postal), as he gets roped into the world of banking. The Discworld capital, Ankh-Morpork, is just being introduced to—you guessed it—paper money. However, citizens remain distrustful of the new system, opting for stamps as currency rather than using the Royal Mint. Cue the Financial Revolution, with Golem Trust miscommunications, a Chief Cashier that may be a vampire, and banking chaos. In his signature satirical style, Pratchett points out the absurdities of the modern financial system we take for granted.

Link to the rest at Electric Lit

PG has three immediate thoughts.

  1. Just about anything can serve as a theme for a fantasy novel and authors are perfectly free to riff on any topic they may choose.
  2. Das Kapital was a factual and logical mess, but an excellent pseudo-economic basis for gaining and keeping power in the hands of its foremost practitioners. The book was first published in 1867 by Verlag von Otto Meissner, which sounds a bit capitalist (and aristocratic) to PG.
  3. Each of the Anti-Capitalist books is published by a thoroughly-capitalist publisher and PG is almost completely certain that each of the authors received an advance against royalties for the book that would feed an impoverished African village for at least a few months.

17 of the Most Devious Sci-Fi and Fantasy Villains

From BookBub:

While incredible power and a raging desire to destroy things are good qualities to find in a sci-fi or fantasy villain, the ability to plot, scheme, and patiently wait until the right time to destroy your enemies elevates your typical villains to a whole new level. So it’s little wonder that any list of the best sci-fi and fantasy villains will also be a list of the most devious villains. All of the evil beings (and entities) on this list are ingeniously formidable foes. And, frankly, we love them for it.

Baron Vladimir Harkonnen — Dune

Truly devious villains know that sometimes you have to make an apparent sacrifice in order to arrange the playing board to your advantage. Baron Vladimir Harkonnen, who brought his disgraced house back into power and influence by sheer force of will, plays this trick on the noble House Atreides, his rivals, when he gives up control over Arrakis and the all-important melange trade to them. But this is just the beginning of Harkonnen’s ingenious and horrifying plan to destroy his enemies and gain power for his own infamous house.

. . . .

The Aesi — Black Leopard, Red Wolf

The Aesi is a terrifying being dispatched by King Kwash Dara to thwart the band of improbable protagonists in this novel. Able to enter dreams, control minds, and send assassins made of dust, the Aesi is perhaps the most terrifying creature in a book absolutely filled with terrifying creatures. (You don’t get a nickname like ‘The god butcher’ for nothing.) But the true devious nature of the Aesi is revealed in a twist that makes it more terrifying than ever.

Link to the rest at BookBub

Theatrical Shortcuts for Dynamic Fiction

From SFWA:

I’m often asked if my professional theatre and playwriting background helps me as a fiction writer. It does in countless ways. Theatrical form, training, and structure are holistically integrated into how I see the world and operate as a storyteller. I adore diving deep into character, creating atmosphere, and ‘setting the stage’ for my novels. I became a traditionally published novelist many years after I’d established myself on stage and published as a playwright.

I teach a workshop called “Direct Your Book: Theatre Techniques Towards A Blockbuster Novel” about using theatrical concepts to invigorate, inspire, and problem-solve in fiction writing. Here’s what I’ve found to be the most consistently useful takeaways:

Physicality. One of my favorite aspects of character building when taking on a role is figuring out how they move; where their “center of gravity” is, whether the gut, the chest, or the head; what part of their body leads the way? Thinking about this can really ground you in the bodies of your characters and how they interact with their world.

Environment. I’m a licensed New York City tour guide and there’s really nothing like moving through the streets your characters move through and truly living in all those details. In my Spectral City series, I utilize many of the city’s most haunted paths as the routes my psychic medium heroine takes to navigate the city. Her noting the various haunts of the city creates a sort of ‘lived in’ feel to the prose and to her experiences as a psychic detective. There is something to be said sometimes for writing ‘what you know’. If at all possible, visiting a place that informs your world directly, or inspires it if your world is a secondary one, can add so much in detail and expansive sensory experience. You can pair the experience of walking and drinking in this environment by thinking of the characters’ physicality and qualities of movement as you do so.

Clothing. Even if it isn’t a period piece, clothing tells a lot about a world and how characters live in it. Every clothing choice is an act of world-building. If your work is historical or historically informed, I suggest spending time in clothing from the time period. Try to rent something or commission something you could walk, run, move, and interact in for a period of time that helps you understand how garments inform movement, posture, breathing, existing. These things change radically across class and area of the world. For my part, as most of my novels are set in the late 19th century, the most important gift the theatre gave my historical novels is a tactile reality and personal experience ‘existing’ in other time periods with which I can paint details. In the 19th century, for example, women could be wearing an average of 40 pounds of clothing and that significantly affects one’s daily life. Knowing what it is like to move, sit, prepare food, lift, climb stairs, walk, trot, run, seize, weep, laugh, recline, jump and collapse in a corset, bodice, bustle, petticoat, hat, layers, gloves, and other accessories–all of which I’ve personally experienced in various historical plays and presentations I’ve acted in–is vitally important to taking the reader physically as well as visually and emotionally through a character’s experience. It changes breathing, posture, and interactions with the environment and others in a core, defining way.

Link to the rest at SFWA

Samuel R. Delany, The Art of Fiction

From The Paris Review:

The first time I interview Samuel Delany, we meet in a diner near his apartment on New York’s Upper West Side. It is a classic greasy spoon that serves strong coffee and breakfast all day. We sit near the window, and Delany, who is a serious morning person, presides over the city as it wakes. Dressed in what is often his uniform—black jeans and a black button-down shirt, ear pierced with multiple rings—he looks imperial. His beard, dramatically long and starkly white, is his most distinctive feature. “You are famous, I can just tell, I know you from somewhere,” a stranger tells him in the 2007 documentary Polymath, or the Life and Opinions of Samuel R. Delany, Gentleman. Such intrusions are common, because Delany, whose work has been described as limitless, has lived a life that flouts the conventional. He is a gay man who was married to a woman for twelve years; he is a black man who, because of his light complexion, is regularly asked to identify his ethnicity. Yet he seems hardly bothered by such attempts to figure him out. Instead, he laughs, and more often than not it is a quiet chuckle expressed mostly in his eyes.

Delany was born on April 1, 1942, in Harlem, by then the cultural epicenter of black America. His father, who had come to New York from Raleigh, North Carolina, ran Levy and Delany, a funeral home to which Langston Hughes refers in his stories about the neighborhood. Delany grew up above his father’s business. During the day he attended Dalton, an elite and primarily white prep school on the Upper East Side; at home, his mother, a senior clerk at the New York Public Library’s Countee Cullen branch, on 125th Street, nurtured his exceptional intelligence and kaleidoscopic interests. He sang in the choir at St. Philip’s, Harlem’s black Episcopalian church, composed atonal music, played multiple instruments, and choreographed dances at the General Grant Community Center. In 1956, he earned a spot at the Bronx High School of Science, where he would meet his future wife, the poet Marilyn Hacker.

In the early sixties, the newly married couple settled in the East Village. There, Delany wrote his first novel, The Jewels of Aptor. He was nineteen. Over the next six years, he published eight more science-fiction novels, among them the Nebula Award winners Babel-17 (1966) and The Einstein Intersection (1967). 

. . . .

In 1971, he completed a draft of a book he had been reworking for years. Dhalgren, his story of the Kid, a schizoid, amnesiac wanderer, takes place in Bellona, a shell of a city in the American Midwest isolated from the rest of the world and populated by warring gangs and holographic beasts. When Delany, Hacker, and their one-year-old daughter flew back to the States just before Christmas Eve in 1974, they saw copies of Dhalgren filling book racks at Kennedy Airport even before they reached customs. Over the next decade, the novel sold more than a million copies and was called a masterpiece by some critics. William Gibson famously described it as “a riddle that was never meant to be solved.”

. . . .

INTERVIEWER

Between the time you were nineteen and your twenty-second birthday, you wrote and sold five novels, and another four by the time you were twenty-six, plus a volume of short stories. Fifty years later, considerably more than half that work is still in print. Was being a prodigy important to you?

DELANY

As a child I’d run into Wilde’s witticism “The only true talent is precociousness.” I took my writing seriously, and it seemed to pay off. And I discovered Rimbaud. The notion of somebody just a year or two older than I was, who wrote poetry people were reading a hundred, a hundred fifty years later and who had written the greatest poem in the French language, or at least the most famous one, “Le Bateau Ivre,” when he was just sixteen—that was enough to set my imagination soaring. At eighteen I translated it.

In the same years, I found the Signet paperback of Radiguet’s Devil in the Flesh and, a few months after that, the much superior Le Bal du Comte d’Orgel, translated as Count d’Orgel in the first trade paperback from Grove Press, with Cocteau’s deliciously suggestive “introduction” about its tragic young author, salted with such dicta as “Which family doesn’t have its own child prodigy? They have invented the word. Of course, child prodigies exist, just as there are extraordinary men. But they are rarely the same. Age means nothing. What astounds me is Rimbaud’s work, not the age at which he wrote it. All great poets have written by seventeen. The greatest are the ones who manage to make us forget it.”

Now that was something to think about—and clearly it had been said about someone who had not expected to die at twenty of typhoid from eating bad oysters.

. . . .

INTERVIEWER

Do you think of yourself as a genre writer?

DELANY

I think of myself as someone who thinks largely through writing. Thus I write more than most people, and I write in many different forms. I think of myself as the kind of person who writes, rather than as one kind of writer or another. That’s about the closest I come to categorizing myself as one or another kind of artist.

Link to the rest at The Paris Review

Here’s a link to the Samuel R. Delany Author Page on Amazon (where his photo shows a world-class beard)

The English towers and landmarks that inspired Tolkien’s hobbit sagas

From The Guardian:

Readers of The Lord of the Rings must surely imagine lifting their eyes in terror before Saruman’s dark tower, known as Orthanc. Over the years, many admirers of the Middle-earth sagas have guessed at the inspiration for this and other striking features of the landscape created by JRR Tolkien.

Now an extensive new study of the author’s work is to reveal the likely sources of key scenes. The idea for Saruman’s nightmarish tower, argues leading Tolkien expert John Garth, was prompted by Faringdon Folly in Berkshire.

“I have concentrated on the places that inspired Tolkien and though that may seem a trivial subject, I hope I have brought some rigour to it,” said Garth this weekend. “I have a fascination for the workings of the creative process and in finding those moments of creative epiphany for a genius like Tolkien.”

A close study of the author’s life, his travels and his teaching papers has led Garth to a fresh understanding of an allegory that Tolkien regularly called upon while giving lectures in Old English poetry at Oxford in the 1930s.

Comparing mysteries of bygone poetry to an ancient tower, the don would talk of the impossibility of understanding exactly why something was once built. “I have found an interesting connection in his work with the folly in Berkshire, a nonsensical tower that caused a big planning row,” Garth explains. While researching his book he realised the controversy raging outside the university city over the building would have been familiar to Tolkien.

Tolkien began to work this story into his developing Middle-earth fiction, finally planting rival edifices on the Tower Hills on the west of his imaginary “Shire” and also drawing on memories of other real towers that stand in the Cotswolds and above Bath. “Faringdon Folly isn’t a complete physical model for Orthanc,” said Garth. “It’s the controversy surrounding its building that filtered into Tolkien’s writings and can be traced all the way to echoes in the scene where Gandalf is held captive in Saruman’s tower.”

Link to the rest at The Guardian

Laura Lam: The Gut Punch Of Accidentally Predicting The Future

From Terrible Minds:

I thought Terrible Minds would be the place to talk about the strange, horrible feeling of accidentally predicting the future, since Chuck did it too with Wanderers.

It happens to pretty much any science fiction writer who writes in the near future. Worldbuilding is basically extrapolating cause and effect in different ways. You see a news article somewhere like Futurism and you give a little chuckle—it’s something happening that you predicted in a book, and it’s a strange sense of déjà vu. I used to even share some of the articles with the hashtag #FalseHeartsIRL when I released some cyberpunks a few years ago. I can’t do that with Goldilocks, really, because the stuff I predicted isn’t some interesting bit of tech or a cool way to combat climate change through architecture or urban planning.

Because this time it’s people wearing masks outside. It’s abortion bans. It’s months of isolation. It’s a pandemic.

In real life, it’ll rarely play out exactly as you plan in a book. Some things twist or distort or are more unrealistic than you’d be allowed to put into fiction (e.g. murder wasps or anything that the orange man in the white house utters). In Goldilocks, I have people wearing masks due to climate change being a health risk, which was inspired by how disconcerted I felt seeing a photo of my mother wearing a mask due to the wildfires in California while I live in Scotland.

. . . .

Five women steal a spaceship to journey to Cavendish, a planet 10 light years away and humanity’s hope for survival and for a better future. A planet they hopefully won’t spoil like the old one. It’ll take the Atalanta 5 a few months to journey to Mars to use the test warp ring to jump to Epsilon Eridani (the real star for my fake planet), and then a few more months’ travel on the other side. It’s a long time to be with the same people. I did not expect those elements of how the women cope with isolation to be a how-to for 2020. I read a lot of astronaut memoirs, and that has probably helped me cope with lockdown a bit better than I might have (my top rec is Chris Hadfield’s An Astronaut’s Guide to Life on Earth).

Though it’s a mild spoiler, in light of current events I have been warning people that there is a pandemic in the book. It’s not a huge focus of the plot and it never gets graphic, but I forwarded an article about coronavirus to my editor on January 22nd with basically a slightly more professional version of ‘shit.’ While the illness within the book is not quite as clear an echo as White Mask, it’s still strange. The last thing I expected when I wrote a book with a pandemic was to have its launch interrupted by an actual pandemic.

You don’t feel clever, or proud, when you predict these sorts of things. You feel guilty when you see the nightmares about the future come true instead of the dreams.

Link to the rest at Terrible Minds

Will Fantasy Ever Let Black Boys Like Me Be Magic?

From Tor.com:

My first book on magic was A Wizard of Earthsea by Ursula K. Le Guin. It was a single story which expanded into a long-standing series about Ged, the greatest wizard known to his age, and the many mistakes made in his youth which inspired a battle against his dark side, before he righted himself with his darkness.

As a Black boy, I always had a fascination with stories of boys with more to offer than what the world had the ability to see in them. Le Guin offered something along that line—the fantasy of untapped potential, of surviving poverty, of coming to terms with one’s dark side.

However, Ged’s story isn’t what substantiated my attachment to Ursula K. Le Guin’s world; it was Vetch, the Black wizard of the story and Ged’s sidekick. In A Wizard of Earthsea, Vetch is first introduced through a bully named Jasper as a heavy-set, dark skinned wizard a few years older than Ged. Vetch was described as “plain, and his manners were not polished,” a trait that stood out even amongst a table of noisy boys. Unlike the other boys, he didn’t take much to the drama of showmanship, or of hazing and—when the time finally came—he abandoned his good life as a powerful wizard and lord over his servants and siblings to help Ged tame his shadow, then was never seen again.

Black wizards have always been an enigma. I picked up A Wizard of Earthsea years after Harry Potter graced the silver screen and of course, I’d seen Dean Thomas, but there was more to the presentation of Vetch than illustrated in Dean’s limited time on screen.

. . . .

Fantasy has a habit of making Black characters the sidekick. And yet, years after Ged journeyed away from his closest friend, Vetch’s life did not stop: it moved on, prosperously. Representation of Blackness has always been a battle in Fantasy. It isn’t that the marginalized have never found themselves in these stories, but there was always a story written within the margins.

Writing from the perspective of the mainstream demographic often results in the sometimes unintentional erasing of key aspects of a true human experience: where you can be angry, internally, at harmful discrimination and you can do something selfish and negative, because it’s what you feel empowers you. If to be marginalized is to not be given permission to be fully human, then these Black characters (Vetch & Dean Thomas) have never escaped the margins; and if this act is designated as the “right way,” then no character ever will, especially not the ones we see as true change in our imaginations.

Link to the rest at Tor.com

The Danger of Intimate Algorithms

PG thought this might be an interesting writing prompt for sci-fi authors.

From Public Books:

After a sleepless night—during which I was kept awake by the constant alerts from my new automatic insulin pump and sensor system—I updated my Facebook status to read: “Idea for a new theory of media/technology: ‘Abusive Technology.’ No matter how badly it behaves one day, we wake up the following day thinking it will be better, only to have our hopes/dreams crushed by disappointment.” I was frustrated by the interactions that took place between, essentially, my body and an algorithm. But perhaps what took place could best be explained through a joke:

What did the algorithm say to the body at 4:24 a.m.?

“Calibrate now.”

What did the algorithm say to the body at 5:34 a.m.?

“Calibrate now.”

What did the algorithm say to the body at 6:39 a.m.?

“Calibrate now.”

And what did the body say to the algorithm?

“I’m tired of this s***. Go back to sleep unless I’m having a medical emergency.”

Although framed humorously, this scenario is a realistic depiction of the life of a person with type 1 diabetes, using one of the newest insulin pumps and continuous glucose monitor (CGM) systems. The system, Medtronic’s MiniMed 670G, is marketed as “the world’s first hybrid closed loop system,” meaning it is able to automatically and dynamically adjust insulin delivery based on real-time sensor data about blood sugar. It features three modes of use: (1) manual mode (preset insulin delivery); (2) hybrid mode with a feature called “suspend on low” (preset insulin delivery, but the system shuts off delivery if sensor data indicates that blood sugar is too low or going down too quickly); and (3) auto mode (dynamically adjusted insulin delivery based on sensor data).

In this context, the auto mode is another way of saying the “algorithmic mode”: the machine, using an algorithm, would automatically add insulin if blood sugar is too high and suspend the delivery of insulin if blood sugar is too low. And this could be done, the advertising promised, in one’s sleep, or while one is in meetings or is otherwise too consumed in human activity to monitor a device.  Thanks to this new machine, apparently, the algorithm would work with my body. What could go wrong?
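
As a rough illustration of what an “auto mode” decision might look like on each control cycle, here is a minimal sketch. It is an assumption-laden toy, not Medtronic’s actual, proprietary algorithm: the thresholds, names, and calibration behavior below are invented for illustration only.

# Illustrative sketch only: thresholds, names, and logic are assumptions,
# not the MiniMed 670G's actual (proprietary) control algorithm.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SensorReading:
    glucose_mg_dl: float        # current sensor glucose estimate
    trend_mg_dl_per_min: float  # rate of change
    needs_calibration: bool     # sensor is requesting a fingerstick value

def auto_mode_step(reading: SensorReading, basal_u_per_hr: float) -> Tuple[float, Optional[str]]:
    """Return (insulin delivery rate, optional alert) for one control cycle."""
    if reading.needs_calibration:
        # Keep the preset rate but demand human labor, whatever the hour.
        return basal_u_per_hr, "Calibrate now"
    if reading.glucose_mg_dl < 70 or (reading.glucose_mg_dl < 90 and reading.trend_mg_dl_per_min < -1.5):
        # Low, or dropping quickly: suspend delivery ("suspend on low").
        return 0.0, "Insulin suspended: low glucose"
    if reading.glucose_mg_dl > 150:
        # High: deliver more than the baseline rate.
        return basal_u_per_hr * 1.3, None
    return basal_u_per_hr, None

# A 4:24 a.m. cycle in which the sensor demands calibration:
rate, alert = auto_mode_step(SensorReading(110.0, 0.2, True), basal_u_per_hr=0.8)
print(rate, alert)  # 0.8 Calibrate now

Even in this toy version, the controller’s first move when its own data is in doubt is to demand work from the wearer: the “Calibrate now” alert the essay describes.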

Unlike drug makers, companies that make medical devices are not required to conduct clinical trials in order to evaluate the side effects of these devices prior to marketing and selling them. While the US Food and Drug Administration usually assesses the benefit-risk profile of medical devices before they are approved, often risks become known only after the devices are in use (the same way bugs are identified after an iPhone’s release and fixed in subsequent software upgrades). The FDA refers to this information as medical device “emerging signals” and offers guidance as to when a company is required to notify the public.

As such, patients are, in effect, exploited as experimental subjects, who live with devices that are permanently in beta. And unlike those who own the latest iPhone, a person who is dependent on a medical device—due to four-year product warranties, near monopolies in the health care and medical device industry, and health insurance guidelines—cannot easily downgrade, change devices, or switch to another provider when problems do occur.

It’s easy to critique technological systems. But it’s much harder to live intimately with them. With automated systems—and, in particular, with networked medical devices—the technical, medical, and legal entanglements get in the way of more generous relations between humans and things.

. . . .

In short, automation takes work. Specifically, the system requires human labor in order to function properly (and this can happen at any time of the day or night). Many of the pump’s alerts and alarms signal that “I need you to do something for me,” without regard for the context. When the pump needs to calibrate, it requires that I prick my finger and test my blood glucose with a meter in order to input more accurate data. It is necessary to do this about three or four times per day to make sure that the sensor data is accurate and the system is functioning correctly. People with disabilities such as type 1 diabetes are already burdened with additional work in order to go about their day-to-day lives—for example, tracking blood sugar, monitoring diet, keeping snacks handy, ordering supplies, going to the doctor. A system that unnecessarily adds to that burden while also diminishing one’s quality of life due to sleep deprivation is poorly designed, as well as unjust and, ultimately, dehumanizing.

. . . .

The next day was when I posted about “abusive technologies.” This post prompted an exchange about theorist Lauren Berlant’s “cruel optimism,” described as a relation or attachment in which “something you desire is actually an obstacle to your flourishing.” 

. . . .

There are many possible explanations for the frequent calibrations, but even the company does not have a clear understanding of why I am experiencing them. For example, with algorithmic systems, it has been widely demonstrated that even the engineers of these systems do not understand exactly how they make decisions. One possible explanation is that my blood sugar data may not fit with the patterns in the algorithm’s training data. In other words, I am an outlier. 

. . . .

In the medical field, the term “alert fatigue” is used to describe how “busy workers (in the case of health care, clinicians) become desensitized to safety alerts, and consequently ignore—or fail to respond appropriately to—such warnings.”

. . . .

And doctors and nurses are not the only professionals to be constantly bombarded and overwhelmed with alerts; as part of our so-called “digital transformation,” nearly every industry will be dominated by such systems in the not-so-distant future. The most oppressed, contingent, and vulnerable workers are likely to have even less agency in resisting these systems, which will be used to monitor, manage, and control everything from their schedules to their rates of compensation. As such, alerts and alarms are the lingua franca of human-machine communication.

. . . .

Sensors and humans make strange bedfellows indeed. I’ve learned to dismiss the alerts while I’m sleeping (without paying attention to whether they indicate a life-threatening scenario, such as extreme low blood sugar). I’ve also started to turn off the sensors before going to bed (around day four of use) or in the middle of the night (as soon as I realize that the device is misbehaving).

. . . .

Ultimately, I’ve come to believe that I am even “sleeping like a sensor” (that is, in shorter stretches that seem to mimic the device’s calibration patterns). Thanks to this new device, and its new algorithm, I have begun to feel a genuine fear of sleeping.

Link to the rest at Public Books

Why Am I Reading Apocalyptic Novels Now?

From The New York Times:

A man and his son trudge through the wasteland into which human civilization has devolved. Every night, they shiver together in hunger and cold and fear. If they encounter someone weaker than they are — an injured man, an abandoned child — they do not have the resources to help, and if they encounter someone stronger, violence is assured. The man lives for the child, and the child regularly expresses a desire for death.

I am describing the novel “The Road,” by Cormac McCarthy. The last time I can remember being hit so hard by a work of fiction was decades ago, reading “The Brothers Karamazov” while I had a high fever: I hallucinated the characters. I can still remember Ivan and Alyosha seeming to float in the space around my bed. This time, however, I’m not sick — yet — nor am I hallucinating.

Like many others, I have been finding my taste in books and movies turning in an apocalyptic direction. I also find myself much less able than usual to hold these made-up stories at a safe distance from myself. That father is me. That child is my 11-year-old son. Their plight penetrates past the “just fiction” shell, forcing me to ask, “Is this what the beginning of that looks like?” I feel panicked. I cannot fall asleep.

Why torture oneself with such books? Why use fiction to imaginatively aggravate our wounds, instead of to soothe them or, failing that, just let them be? One could raise the same question about nonfictional exercises of the imagination: Suppose I contemplate something I did wrong and consequently experience pangs of guilt about it. The philosopher Spinoza thought this kind of activity was a mistake: “Repentance is not a virtue, i.e. it does not arise from reason. Rather, he who repents what he did is twice miserable.”

This sounds crazier than it is. Immersed as we are in a culture of public demands for apology, we should be careful to understand that Spinoza is making a simple claim about psychological economics: There’s no reason to add an additional harm to whatever evils have already taken place. More suffering does not make the world a better place. The mental act of calling up emotions such as guilt and regret — and even simple sadness — is imaginative self-flagellation, and Spinoza urges us to avoid “pain which arises from a man’s contemplation of his own infirmity.”

Should one read apocalypse novels during apocalyptic times? Is there anything to be said for inflicting unnecessary emotional pain on oneself? I think there is.

Link to the rest at The New York Times

PG is reading a post-apocalyptic novel, Ready Player One, right now. (Yes, he realizes he is behind the RPO curve by several years.) He has also started, but not finished, a couple of others in the same genre.

Insensitive lug that he is, PG has not felt any pain from/while reading these books. He finds them helpful in getting his mind around a great many things that people around the world are experiencing, thinking and feeling at the moment.

Out of This World: Shining Light on Black Authors in Every Genre

From Publishers Weekly:

This February saw numerous articles and lists touting the work of award-winning black authors, and works that have quite literally shaped the narrative for black people of the diaspora. We’ll hear names such as Zora Neale Hurston, Maya Angelou, Toni Morrison, Frederick Douglass, and Langston Hughes. Their contributions are and should be forever venerated in the canon of literature.

What we didn’t hear as much about are the writers of genre fiction: thrillers, romance, and in particular, science fiction and fantasy. Why is this relevant? In the last decade, sci-fi and fantasy narratives have taken the media by storm. Marvel has been dominating the box office, Game of Thrones had us glued to our TVs and Twitter feeds (Black Twitter’s #demthrones hashtag in particular had me rolling), and people are making seven-figure salaries playing video games online. It’s a good time to be a nerd. The world is finally coming to appreciate the unique appeal of science fiction and fantasy. It’s wondrous, fun, escapist and whimsical, dazzling and glamorous. It takes the mundane and makes it cool. And for the longest time it’s been very Eurocentric. With sci-fi and fantasy growing exponentially more popular year by year, it’s necessary that, alongside black fiction’s rich history of award-winning literary giants, we also shine the spotlight on black works of speculative fiction.

Narratives like Roots, Beloved, and Twelve Years a Slave unflinchingly depict the horrors of slavery.

. . . .

But when the only stories about black people that are given prominence are the ones where black people are abused and oppressed, a very specific and limiting narrative is created for us and about us. And this narrative is one of the means through which the world perceives black people and, worse, through which we perceive ourselves.

It should be noted that black literary fiction does not focus exclusively on black suffering—far from it. The beauty of black literature is that black characters are centered and nuanced, and sci-fi and fantasy narratives can build on that. Through sci-fi and fantasy, we can portray ourselves as mages, bounty hunters, adventurers, and gods. And in the case of sci-fi narratives set in the future, as existing—period.

Sci-fi stories in particular are troubling for their absence of those melanated. Enter Afrofuturism, a term first used in the 1990s in an essay by a white writer named Mark Dery. In a 2019 talk on Afrofuturism at Wellesley College, sci-fi author Samuel R. Delany breaks down what the term meant at the time—essentially fiction set in the future with black characters present. Delany also explains why this is potentially problematic: “[Afrofuturism was] not contingent on the race of the writer, but on the race of the characters portrayed.” 

Link to the rest at Publishers Weekly

PG is reading a fantasy/scifi series that features various types and classes of highly-intelligent bipeds. In the book, some are described, in part, by their skin colors which include brown and black. There are no characters described as having white skin.

However, there is nothing to distinguish the characters with brown or black skins from any of the other characters. There are classes of characters with more magical powers than other classes, but no correlation between skin color and powers. In fact, blue-, pink- and green-colored races have a lower power status than the classes that include brown or black skins, and the classes with white skin also comprise the lowest strata of society.

PG has not been able to discern any particular messages associated with those characters with brown or black skin. They’re just tossed in here and there. Personally, he finds nothing objectionable about this practice. This is fantasy, after all, with worlds, people, magic and technology that don’t exist or have any obvious counterparts on planet Earth. If the author had attempted to inject issues pertinent to 21st-century Earth, it would have seemed out of place and potentially have disrupted the suspension of disbelief that accompanies fantasy, scifi and a variety of other fiction genres.

Worried a Robot Apocalypse Is Imminent?

From The Wall Street Journal:

You Look Like a Thing and I Love You

Elevator Pitch: Ideal for those intrigued and/or mildly unnerved by the increasing role A.I. plays in modern life (and our future), this book is accessible enough to educate you while easing anxieties about the coming robot apocalypse. A surprisingly hilarious read, it presents a view of A.I. that is more “Office Space” than “The Terminator.” Typical insight: A.I. that can’t write a coherent cake recipe is probably not going to take over the world.

Very Brief Excerpt: “For the foreseeable future, the danger will not be that A.I. is too smart but that it’s not smart enough.”

Surprising Factoid: A lot of what we think are social-media bots are almost definitely humans being (poorly) paid to act as a bot. People stealing the jobs of robots: How meta.

. . . .

The Creativity Code

By Marcus du Sautoy

Elevator Pitch: What starts as an exploration of the many strides—and failures—A.I. has made in the realm of artistic expression turns out to be an ambitious meditation on the meaning of creativity and consciousness. It shines in finding humanlike traits in algorithms; one chapter breathlessly documents the matches between Mr. Hassabis’s algorithm and a world champion of Go, a game many scientists said a computer could never win.

Very Brief Excerpt: “Machines might ultimately help us…become less like machines.”

Surprising Factoid: As an example of “overfitting,” the book includes a mathematical model that accidentally predicts the human population will drop to zero by 2028. Probably an error, but better live it up now—just in case.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

A Sci-Fi Author’s Boldest Vision of Climate Change: Surviving It

From The Wall Street Journal:

Kim Stanley Robinson spends his days inventing fictional versions of a future where the climate has changed. In his 2017 novel “New York 2140,” sea levels in the city have risen by 50 feet; boats flit over canals between docks at skyscrapers with watertight basements. In 2005’s “Forty Signs of Rain,” an epochal storm called Tropical Storm Sandy floods most of Washington, D.C. It came out seven years before Superstorm Sandy pummeled New York.

The 67-year-old author of 20 books and winner of both Hugo and Nebula awards for excellence in science-fiction writing, Mr. Robinson is regarded by critics as a leading writer of “climate fiction”—“cli-fi” for short. He considers himself a science-fiction writer, but also says that books set in the future need to take a changing climate into consideration or risk coming across as fantasy.

The term “cli-fi” first appeared around 2011, possibly coined by a blogger named Dan Bloom, and has been a growing niche in science fiction ever since. Books by Margaret Atwood and Barbara Kingsolver are often included in the emerging category. In general, cli-fi steers clear of the space-opera wing of science fiction and tends to be set in a not-too-distant, largely recognizable future.

. . . .

A lot of climate fiction is bleak, but “New York 2140” is kind of utopian. Things work out. What do you think—will the future be dystopian or utopian?

They are both completely possible. It really depends on what we do now and in the next 20 years. I don’t have a prediction to make. Nobody does. The distinguishing feature of right now and the reason that people feel so disoriented and mildly terrified is that it could go really, really badly, into a mass extinction event [for many animal species].

Humans will survive. We are kind of like the seagulls and the ants and the cockroaches and the sharks. It isn’t as if humanity itself is faced with outright extinction, but civilization could crash.

In some sense, [dystopia] is even more plausible. Like, oh, we are all so selfish and stupid, humanity is bound to screw up. But the existence of 8 billion people on a planet at once is a kind of social/technological achievement in cooperation. So, if you focus your attention on that side, you can begin to imagine that the utopian course of history is not completely unlikely.

Venture capitalists and entrepreneurs are in the business of making guesses about the future, as you do in your fiction. How do you create something plausible?

I read the scientific literature at the lay level—science news, the public pages of Nature. I read, I guess you would call it political economy—the works of sociology and anthropology that are trying to study economics and see it as a hierarchical set of power relations. A lot of my reading is academic. I am pretty ignorant in certain areas of popular culture. I don’t pay any attention to social media, and I know that is a big deal, but by staying out of it, I have more time for my own pursuits.

Then what I do is I propose one notion to myself. Say sea level goes up 50 feet. Or in my novel “2312,” say we have inhabited the solar system but we still haven’t solved our [environmental] problems. Or in my new novel, which I am still completing, say we do everything as right as we can in the next 30 years, what would that look like? Once I get these larger project notions, then that is the subject of a novel. It is not really an attempt to predict what will really happen, it is just modeling one scenario.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

Conspiracy Theories

Given the nature of the post that will appear immediately adjacent to this one – “The Silurian Hypothesis” – PG discovered that Wikipedia has a page devoted to Conspiracy Theories that could surely contain some of the best writing prompts ever for authors writing in particular genres:

Aviation

Numerous conspiracy theories pertain to air travel and aircraft. Incidents such as the 1955 bombing of the Kashmir Princess, the 1985 Arrow Air Flight 1285 crash, the 1986 Mozambican Tupolev Tu-134 crash, the 1987 Helderberg Disaster, the 1988 bombing of Pan Am Flight 103 and the 1994 Mull of Kintyre helicopter crash as well as various aircraft technologies and alleged sightings, have all spawned theories of foul play which deviate from official verdicts.[3]

Black helicopters

This conspiracy theory emerged in the U.S. in the 1960s. It was originally promoted by the John Birch Society, which asserted that a United Nations force would soon arrive in black helicopters to bring the U.S. under UN control.[4] The theory re-emerged in the 1990s, during the presidency of Bill Clinton, and has been promoted by talk show host Glenn Beck.[5][6] A similar theory concerning so-called “phantom helicopters” appeared in the UK in the 1970s.[7]

Chemtrails

Main article: Chemtrail conspiracy theory

Also known as SLAP (Secret Large-scale Atmospheric Program), this theory alleges that water condensation trails (“contrails”) from aircraft consist of chemical or biological agents, or contain a supposedly toxic mix of aluminum, strontium and barium,[8] under secret government policies. An estimated 17% of people globally believe the theory to be true or partly true. In 2016, the Carnegie Institution for Science published the first-ever peer-reviewed study of the chemtrail theory; 76 out of 77 participating atmospheric chemists and geochemists stated that they had seen no evidence to support the chemtrail theory, or stated that chemtrail theorists rely on poor sampling.[9][10]

Korean Air Lines Flight 007

The destruction of Korean Air Lines Flight 007 by Soviet jets in 1983 has long drawn the interest of conspiracy theorists. The theories range from allegations of a planned espionage mission, to a US government cover-up, to the consumption of the passengers’ remains by giant crabs.

Malaysia Airlines Flight MH370

The disappearance of Malaysia Airlines Flight 370 in southeast Asia in March 2014 has prompted many theories. One theory suggests that this plane was hidden away and reintroduced as Flight MH17 later the same year in order to be shot down over Ukraine for political purposes. Prolific American conspiracy theorist James H. Fetzer has placed responsibility for the disappearance with Israeli Prime Minister Benjamin Netanyahu.[11] Theories have also related to allegations that a certain autopilot technology was secretly fitted to the aircraft.[12]

Malaysia Airlines Flight MH17

Malaysia Airlines Flight 17 was shot down over Ukraine in July 2014. This event has spawned numerous alternative theories. These variously include allegations that it was secretly Flight MH370, that the plane was actually shot down by the Ukrainian Air Force to frame Russia, that it was part of a conspiracy to conceal the “truth” about HIV (seven disease specialists were on board), or that the Illuminati or Israel was responsible.[11][13]

. . . .

Espionage

Israel animal spying

Conspiracy theories exist alleging that Israel uses animals to conduct espionage or to attack people. These are often associated with conspiracy theories about Zionism. Matters of interest to theorists include a series of shark attacks in Egypt in 2010, Hezbollah’s accusations of the use of “spying” eagles,[73] and the 2011 capture of a griffon vulture carrying an Israeli-labeled satellite tracking device.[74]

Harold Wilson

Numerous persons, including former MI5 officer Peter Wright and Soviet defector Anatoliy Golitsyn, have alleged that British Prime Minister Harold Wilson was secretly a KGB spy. Historian Christopher Andrew has lamented that a number of people have been “seduced by Golitsyn’s fantasies”.[75][76][77]

Malala Yousafzai

Conspiracy theories concerning Malala Yousafzai are widespread in Pakistan, elements of which originate from a 2013 satirical piece in Dawn. These theories variously allege that she is a Western spy, or that her attempted murder by the Taliban in 2012 was a secret operation to further discredit the Taliban, and was organized by her father and the CIA and carried out by actor Robert de Niro disguised as an Uzbek homeopath.[78][79][80][81]

Link to the rest at List of Conspiracy Theories – Wikipedia

The Silurian Hypothesis

From The Paris Review:

When I was eleven, we lived in an English Tudor on Bluff Road in Glencoe, Illinois. One day, three strange men (two young, one old) knocked on the door. Their last name was Frank. They said they’d lived in this house before us, not for weeks but decades. For twenty years, this had been their house. They’d grown up here. Though I knew the house was old, it never occurred to me until then that someone else had lived in these rooms, that even my own room was not entirely my own. The youngest of the men, whose room would become mine, showed me the place on a brick wall hidden by ivy where he’d carved his name. “Bobby Frank, 1972.” It had been there all along. And I never even knew it.

That is the condition of the human race: we have woken to life with no idea how we got here, where that is or what happened before. Nor do we think much about it. Not because we are incurious, but because we do not know how much we don’t know.

What is a conspiracy?

It’s a truth that’s been kept from us. It can be a secret but it can also be the answer to a question we’ve not yet asked.

Modern humans have been around for about 200,000 years, but life has existed on this planet for 3.5 billion. That leaves 3,495,888,000 pre-human years unaccounted for—more than enough time for the rise and fall of not one but several pre-human industrial civilizations. Same screen, different show. Same field, different team. An alien race with alien technology, alien vehicles, alien folklore, and alien fears, beneath the familiar sky. There’d be no evidence of such bygone civilizations, built objects and industry lasting no more than a few hundred thousand years. After a few million, with plate tectonics at work, what is on the surface, including the earth itself, will be at the bottom of the sea and the bottom will have become the mountain peaks. The oldest place on the earth’s surface—a stretch of Israel’s Negev Desert—is just over a million years old, nothing on a geological clock.

The result of this is one of my favorite conspiracy theories, though it’s not a conspiracy in the conventional sense, a conspiracy usually being a secret kept by a nefarious elite. In this case, the secret, which belongs to the earth itself, has been kept from all of humanity, which believes it has done the only real thinking and the only real building on this planet, as it once believed the earth was at the center of the universe.

Called the Silurian Hypothesis, the theory was proposed in 2018 by Gavin Schmidt, a climate modeler at NASA’s Goddard Institute, and Adam Frank, an astrophysicist at the University of Rochester. Schmidt had been studying distant planets for hints of climate change, “hyperthermals,” the sort of quick temperature rises that might indicate the moment a civilization industrialized. It would suggest the presence of a species advanced enough to turn on the lights. Such a jump, perhaps resulting from a release of carbon, might be the only evidence that any race, including our own, will leave behind. Not the pyramids, not the skyscrapers, not Styrofoam, not Shakespeare—in the end, we will be known only by a change in the rock that marked the start of the Anthropocene.

Link to the rest at The Paris Review

William Gibson Builds A Bomb

From National Public Radio:

William Gibson does not write novels, he makes bombs.

Careful, meticulous, clockwork explosives on long timers. Their first lines are their cores — dangerous, unstable reactant mass so packed with story specific detail that every word seems carved out of TNT. The lines that follow are loops of brittle wire wrapped around them.

Once, he made bombs that exploded. Upended genre and convention, exploded expectations. The early ones were messy and violent and lit such gorgeous fires. Now, though, he does something different. Somewhere a couple decades ago, he hit on a plot architecture that worked for him — this weird kind of thing that is all build-up and no boom — and he has stuck to it ever since. Now, William Gibson makes bombs that don’t explode. Bombs that are art objects. Not inert. Still goddamn dangerous. But contained.

You can hear them tick. You don’t even have to listen that close. His language (half Appalachian economy, half leather-jacket poet of neon and decay) is all about friction and the gray spaces where disparate ideas intersect. His game is living in those spaces, checking out the view, telling us about it.

Agency, that’s his newest. It’s a prequel/sequel (requel?) to his last book, The Peripheral, which dealt, concurrently, with a medium-future London after a slow-motion apocalypse called “The Jackpot,” and a near-future now where a bunch of American war veterans, grifters, video game playtesters and a friendly robot were trying to stop an even worse future from occurring. It was a time travel story, but done in a way that only Gibson could: Almost believably, in a way that hewed harshly to its own internal logic, and felt both hopeful and catastrophic at the same time.

Link to the rest at National Public Radio

Ten Things You (Probably) Didn’t Know About C.S. Lewis

From The Millions:

C.S. Lewis gained acclaim as a children’s author for his classic series The Chronicles of Narnia. He also gained acclaim for his popular apologetics, including such works as Mere Christianity and The Screwtape Letters. What is more, he gained acclaim as a science fiction writer for his Ransom Trilogy. Furthermore, he gained acclaim for his scholarly work in Medieval and Renaissance literature with The Allegory of Love and A Preface to Paradise Lost. Many writers have their fleeting moment of fame before their books become yesterday’s child—all the rage and then has-been. Remarkably, Lewis’s books in all of these areas have remained in print for 70, 80, and 90 years. Over the years, the print runs have grown.

. . . .

1. Lewis was not English. He was Irish. Because of his long association with Oxford University, and later with Cambridge, many people assume he was English. When he first went to school in England as a boy, he had a strong Irish accent. Both the students and the headmaster made fun of young Lewis, and he hated the English in turn. It would be many years before he overcame his prejudice against the English.

. . . .

4. Lewis gave away the royalties from his books. Though he had only a modest salary as a tutor at Magdalen College, Lewis set up a charitable trust to give away whatever money he received from his books. Having given away his royalties when he first began this practice, he was startled to learn that the government still expected him to pay taxes on the money he had earned!

5. Lewis never expected to make any money from his books. He was sure they would all be out of print by the time he died. He advised one of his innumerable correspondents that a first edition of The Screwtape Letters would not be worth anything since it would be a used book. He advised not paying more than half the original price. They now sell for over $1200.

6. Lewis was instrumental in Tolkien’s writing of The Lord of the Rings. Soon after they became friends in the 1920s, J. R. R. Tolkien began showing Lewis snatches of a massive myth he was creating about Middle Earth. When he finally began writing his “new Hobbit” that became The Lord of the Rings, he suffered from bouts of writer’s block that could last for several years at a time. Lewis provided the encouragement and the prodding that Tolkien needed to get through these dry spells.

. . . .

10. Lewis was very athletic. Even though he hated team sports throughout his life, Lewis was addicted to vigorous exercise. He loved to take 10-, 15-, and 20-mile rapid tromps across countryside, but especially over rugged hills and mountains. He loved to ride a bicycle all over Oxfordshire. He loved to swim in cold streams and ponds. He loved to row a boat. He kept up a vigorous regimen until World War II interrupted his life with all of the new duties and obligations he accepted to do his bit for the war effort.

Link to the rest at The Millions

Sci-Fi Set in the 2020’s Predicted a Dim Decade for Humanity

From BookBub:

Science fiction has always had a complicated relationship with the future. Sure, sci-fi is all about looking forward to the wondrous things that mankind will achieve — flying cars! Personal jetpacks! Venusian vacations! But a bright and happy future is kind of…boring. Even when you imagine a post-scarcity future like the one in Star Trek, you have to throw in a bit of nuclear holocaust and the Neutral Zone to spice things up.

Now that we’re firmly entrenched in the 21st century (which for a long time was shorthand for ‛the future’ in sci-fi), it’s fascinating to look at all the stories set in this particular decade to see how past SF masters thought things were going to go. One thing is abundantly clear: No matter how bad you think the decade is going to be, sci-fi writers think the 2020s are going to be worse.

. . . .

The horrifying, dystopian, and extinction-level scenarios imagined in sci-fi set in the 2020s are impressive. There’s the quiet desperation depicted in The Children of Men—the critically-acclaimed, influential, and unexpected sci-fi novel from master crime writer P.D. James—which imagined the last human children being born in 1995, leading to a world swamped by apathy and suicide in 2021. On the other end of the spectrum, you have a 2020 like the one in the film Reign of Fire, where we’re all battling literal dragons in the ashen remnants of society.

. . . .

In-between, you have just about every kind of misery. In Stephen King’s prescient novel The Running Man, 2025 finds the United States on the brink of economic collapse, with desperate citizens driven to appear on deadly reality-TV shows. (Although maybe it doesn’t matter since Ray Bradbury’s classic short story There Will Come Soft Rains tells us that by 2026, the world will be a nuclear blast zone anyway.) The Running Man is one of King’s most underrated novels, weaving themes of economic inequality decades before the issue was mainstream.

. . . .

[A]pocalypse and dystopia are just more fun. What would you rather be doing, flying around the world with a jetpack because everyone is rich and healthy? Or hunting down replicants in a Blade Runner version of Los Angeles that resembles… well, today’s actual Los Angeles if we’re being honest? Here’s another take: Which is more interesting, going to your job every day in a stable if imperfect society? Or firing up the artillery and battling real, actual dragons? The latter, obviously, which is why sci-fi always goes to the dragons, the evil AIs, and violently sociopathic clones, usually accompanied by a society that’s so far gone that no one bothers with things like jobs anymore.

Link to the rest at BookBub

PG is trying out an Amazon embed function to see how it works (or doesn’t) for visitors to THE PASSIVE VOICE.

Does AI judge your personality?

Perhaps a writing prompt. Like many others, PG has always been fascinated by AI books and stories, but this one generates a bit less optimism.

From ArchyW:

AirBnB wants to know if you have a “Machiavellian” personality before renting you a house on the beach.

The company may be using software to judge if you are reliable enough to rent a house based on what you post on Facebook, Twitter and Instagram.

They will turn the systems loose on social networks, run the algorithms, and get results. For the people at the other end of this process, there will be no transparency, no notice that it is happening, and no appeal process.

The company owns a technology patent designed to rate the “personalities” of potential guests by analyzing their activity on social networks, in order to decide whether they are risky guests who could damage a host’s house.

The final product of its technology is to assign each AirBnB guest a “reliability score”. According to reports, this will be based not only on social media activity, but also on other data found online, including blog posts and legal records.

The technology was developed by Trooly, which AirBnB acquired three years ago. Trooly created a tool based on artificial intelligence, designed to “predict reliable relationships and interactions,” that uses social networks as a data source.

The software builds the score from perceived “personality traits” that it identifies, including some you might predict – awareness, openness, extraversion, kindness – and some stranger ones – “narcissism” and “Machiavellianism,” for example. (Interestingly, the software also looks for involvement in civil litigation, suggesting that now or in the future people could be banned based on a prediction that they are more likely to sue.)
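
As a rough sketch of the general idea only (the trait names are taken from the description above, but the weights, neutral defaults, and rescaling are invented assumptions, not AirBnB’s or Trooly’s undisclosed model), a trait-based “reliability score” might be a weighted combination along these lines:

# Toy illustration only: the weights, defaults, and scaling are invented;
# this is not AirBnB's or Trooly's actual (undisclosed) scoring model.
from typing import Dict

TRAIT_WEIGHTS: Dict[str, float] = {
    "awareness": 0.30,         # traits named in the article...
    "openness": 0.10,
    "extraversion": 0.05,
    "kindness": 0.25,
    "narcissism": -0.15,       # ...including the "stranger" negative ones
    "machiavellianism": -0.15,
}

def reliability_score(trait_estimates: Dict[str, float]) -> float:
    """Collapse per-trait estimates (each 0..1) into a single 0..1 score."""
    # Traits the system could not estimate default to a neutral 0.5.
    raw = sum(w * trait_estimates.get(t, 0.5) for t, w in TRAIT_WEIGHTS.items())
    lo = sum(min(w, 0.0) for w in TRAIT_WEIGHTS.values())  # worst possible raw score
    hi = sum(max(w, 0.0) for w in TRAIT_WEIGHTS.values())  # best possible raw score
    return (raw - lo) / (hi - lo)

print(round(reliability_score({"kindness": 0.9, "machiavellianism": 0.8}), 3))

The point of the toy example is only that a handful of opaque, estimated traits get collapsed into a single number, with no transparency about the weights and no way for the person being scored to appeal.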

AirBnB has not said whether they use the software or not.

If you are surprised, shocked or unhappy about this news, then you are like most people, who are unaware of the enormous and rapidly growing practice of judging people (clients, citizens, employees and students) using AI applied to social media activity.

AirBnB is not the only organization that scans social networks to judge personality or predict behavior. Others include the Department of Homeland Security, employers, school districts, police departments, the CIA and insurance companies, among many more.

Some estimates say that up to half of all university admissions officers use AI-based social media monitoring tools as part of the candidate selection process.

Human resources departments and hiring managers also increasingly use AI-based social media monitoring before hiring.

. . . .

There is only one problem.

AI-based social media monitoring is not that smart

. . . .

The question is not whether the AI applied to data collection works. It surely does. The question is whether social networks reveal truths about users. I am questioning the quality of the data.

For example, scanning someone’s Instagram account can “reveal” that they are fabulously rich and travel the world enjoying champagne and caviar. The truth may be that they are broke, stressed influencers who trade social exposure for hotel rooms and restaurant meals, where they take heavily staged photos created solely to build a reputation. Some people use social networks to deliberately create a false image of themselves.

A Twitter account can show a user as a prominent, constructive and productive member of society, but a second anonymous account unknown to social media monitoring systems would have revealed that person as a sociopathic troll who just wants to watch the world burn. People have multiple social media accounts for different aspects of their personalities. And some of them are anonymous.

. . . .

For example, using profanity online can reduce a person’s reliability score, based on the assumption that rude language indicates a lack of ethics or morality. But recent research suggests the opposite: potty-mouthed people may, on average, be more reliable, as well as more intelligent, more honest and more professionally capable. Do we trust that Silicon Valley software companies know or care about the subtleties and complexities of human personality?

. . . .

There is also a generational division. Younger people are statistically less likely to post publicly, preferring private messaging and social interaction in small groups. Is AI-based social media monitoring fundamentally ageist?

Women are more likely than men to post personal information on social networks (information about themselves), while men are more likely than women to post impersonal information. Posting about personal matters can be more revealing of personality. Is AI-based social media monitoring fundamentally sexist?

Link to the rest at ArchyW

How William Gibson Keeps His Science Fiction Real

From The New Yorker:

Suppose you’ve been asked to write a science-fiction story. You might start by contemplating the future. You could research anticipated developments in science, technology, and society and ask how they will play out. Telepresence, mind-uploading, an aging population: an elderly couple live far from their daughter and grandchildren; one day, the pair knock on her door as robots. They’ve uploaded their minds to a cloud-based data bank and can now visit telepresently, forever. A philosophical question arises: What is a family when it never ends? A story flowers where prospective trends meet.

This method is quite common in science fiction. It’s not the one employed by William Gibson, the writer who, for four decades, has imagined the near future more convincingly than anyone else. Gibson doesn’t have a name for his method; he knows only that it isn’t about prediction. It proceeds, instead, from a deep engagement with the present. When Gibson was starting to write, in the late nineteen-seventies, he watched kids playing games in video arcades and noticed how they ducked and twisted, as though they were on the other side of the screen. The Sony Walkman had just been introduced, so he bought one; he lived in Vancouver, and when he explored the city at night, listening to Joy Division, he felt as though the music were being transmitted directly into his brain, where it could merge with his perceptions of skyscrapers and slums. His wife, Deborah, was a graduate student in linguistics who taught E.S.L. He listened to her young Japanese students talk about Vancouver as though it were a backwater; Tokyo must really be something, he thought. He remembered a weeping ambulance driver in a bar, saying, “She flatlined.” On a legal pad, Gibson tried inventing words to describe the space behind the screen; he crossed out “infospace” and “dataspace” before coming up with “cyberspace.” He didn’t know what it might be, but it sounded cool, like something a person might explore even though it was dangerous.

Gibson first used the word “cyberspace” in 1981, in a short story called “Burning Chrome.” He worked out the idea more fully in his first novel, “Neuromancer,” published in 1984, when he was thirty-six. Set in the mid-twenty-first century, “Neuromancer” follows a heist that unfolds partly in physical space and partly in “the matrix”—an online realm. “The matrix has its roots in primitive arcade games,” the novel explains, “in early graphics programs and military experimentation with cranial jacks.” By “jacking in” to the matrix, a “console cowboy” can use his “deck” to enter a new world:

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation. . . . A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.

. . . .

Most science fiction takes place in a world in which “the future” has definitively arrived; the locomotive filmed by the Lumière brothers has finally burst through the screen. But in “Neuromancer” there was only a continuous arrival—an ongoing, alarming present. “Things aren’t different. Things are things,” an A.I. reports, after achieving a new level of consciousness. “You can’t let the little pricks generation-gap you,” one protagonist tells another, after an unnerving encounter with a teen-ager. In its uncertain sense of temporality—are we living in the future, or not?—“Neuromancer” was science fiction for the modern age. The novel’s influence has increased with time, establishing Gibson as an authority on the world to come.

Link to the rest at The New Yorker

Rare Harry Potter book sells for £50,000 after being kept for decades in code-locked briefcase

From Birmingham Live:

A rare first-edition of Harry Potter which was kept in pristine condition in a code-locked briefcase for decades has fetched a magic £50,000 at auction.

The hardback book is one of just 500 original copies of Harry Potter and the Philosopher’s Stone released in 1997 when JK Rowling was relatively unknown.

Its careful owners had kept the book safely stored away in a briefcase at their home, which they unlocked with a code, in order to preserve the treasured family heirloom.

Book experts had said the novel was in the best condition they had ever seen and estimated it could fetch £25,000 to £30,000 when it went under the hammer.

But the novel smashed its auction estimate when it was bought by a private UK buyer for a total price of £57,040 following a bidding war today (Thurs).

. . . .

Jim Spencer, books expert at Hansons, said: “I’m absolutely thrilled the book did so well – it deserved to.

“I couldn’t believe the condition of it – almost like the day it was made. I can’t imagine a better copy can be found.

“A 1997 first edition hardback of Harry Potter and the Philosopher’s Stone is the holy grail for collectors as so few were printed.

“The owners took such great care of their precious cargo they brought it to me in a briefcase, which they unlocked with a secret code.

“It felt like we were dealing in smuggled diamonds.”

Link to the rest at Birmingham Live

Why J.K. Rowling Should Walk Away From Harry Potter Forever

Note: This post is a few years old, but PG thinks it might be useful for authors writing in a variety of genres.

From The Legal Artist:

The other day, J.K. Rowling gave an interview with Matt Lauer about her charity Lumos and mentioned she probably wouldn’t write another story about Harry and the gang, although she wouldn’t foreclose the opportunity altogether. I don’t know whether Rowling will ever return to Harry Potter but I do know that she shouldn’t. In fact, I think she should relinquish all rights to the Potterverse before she messes it all up.

Okay what? Messes it up? J.K. Rowling is a goddamn international treasure and I should be strung up by the neck for thinking such heretical thoughts, right? Well maybe, but first let me say that I have nothing but admiration for Rowling’s skill and artistry. The books and films stand as towering achievements in their respective fields and the world is undoubtedly a better place with Harry Potter than it would be without. And that’s exactly the problem.

We revere authors and creators of valuable intellectual property. We assume they know what’s best when it comes to their work. And sometimes that’s true! George R.R. Martin certainly believes it. The general sentiment is that his voice is the only one worthy of steering the Game of Thrones ship. The same probably would have been said about J.R.R. Tolkien and Sir Arthur Conan Doyle. But as fans, I think we’ve been burned by too many Special Editions/ Director’s Cuts/ sequels/ prequels/ sidequels/ reboots/ and preboots to feel anything but trepidation when a creator remains involved for too long with their own work. I get it. It’s your baby, and it’s hard to walk away from something that you poured your heart and soul into. But I’m a firm believer in the Death of the Author, and I’ve stated on this blog several times that when a work takes on a certain level of cultural importance, it transcends the law and becomes the property of society at large, not just the creator. That was the original intention when copyright protections were baked into the Constitution. Remember too that history is replete with authors who aren’t the best judges of their own work; George Lucas is a prime example of how far from grace one can fall simply by sticking around for too long. And I want Rowling to avoid that fate.

. . . .

Obviously the law allows Rowling to do whatever she wants. Copyright law, particularly in the U.S., isn’t equipped to consider the cultural importance of works like Star Wars or Harry Potter. The result is that all art, regardless of quality, is treated the same, which can be a good thing because it prevents systemic discrimination. The downside to that approach is that financial reward becomes the only measure of success. And that just makes it harder to let go. It’s easy to convince yourself that you and only you are capable of maintaining the integrity of the work over the long haul. It becomes even easier if there’s a lot of money to be made by doing it. The law incentivizes you to stay. And because copyright terms last for so long (life of the author plus 70 years), Rowling’s great great grandchildren will be able to profit from her work.  And I think it’s a shame to keep something like that so closed-source.

To my eyes, the seams are already showing. Three years ago, Rowling publicly stated that she wished she had killed Ron out of spite and that Hermione really should’ve ended up with Harry. The fact that she admitted this publicly is problematic enough – it shows a tone-deafness to the effect her words have on the fan-base (which is surprising considering her generosity to her fans). It also suggests that she might not have a full grasp of what makes the story work (i.e. that Harry’s arc isn’t about romance).

Link to the rest at The Legal Artist

When Bots Teach Themselves to Cheat

From Wired:

Once upon a time, a bot deep in a game of tic-tac-toe figured out that making improbable moves caused its bot opponent to crash. Smart. Also sassy.

Moments when experimental bots go rogue—some would call it cheating—are not typically celebrated in scientific papers or press releases. Most AI researchers strive to avoid them, but a select few document and study these bugs in the hopes of revealing the roots of algorithmic impishness. “We don’t want to wait until these things start to appear in the real world,” says Victoria Krakovna, a research scientist at Alphabet’s DeepMind unit. Krakovna is the keeper of a crowdsourced list of AI bugs. To date, it includes more than three dozen incidents of algorithms finding loopholes in their programs or hacking their environments.

The specimens collected by Krakovna and fellow bug hunters point to a communication problem between humans and machines: Given a clear goal, an algorithm can master complex tasks, such as beating a world champion at Go. But even with logical parameters, it turns out that mathematical optimization empowers bots to develop shortcuts humans didn’t think to deem off-limits. Teach a learning algorithm to fish, and it might just drain the lake.
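
The loophole-finding described in the OP is easy to reproduce at toy scale. The Python sketch below is an illustration only, not anything from the Wired article or the systems it mentions: the designer's stated objective is simply "catch as many fish as possible," with no penalty for emptying the lake, so even a brute-force search over fixed policies settles on one that drains it.

def simulate(catch_per_step, steps=100, stock=50.0, regrowth=0.5):
    """Fish a lake for a fixed number of steps; the stated reward is total fish caught."""
    caught = 0.0
    for _ in range(steps):
        take = min(stock, catch_per_step)
        caught += take
        stock += regrowth - take   # the lake regrows a little each step
    return caught, stock

# "Training": try a range of fixed policies and keep whichever maximizes the stated reward.
candidates = [c / 2 for c in range(0, 21)]   # catch rates 0.0 through 10.0
learned = max(candidates, key=lambda rate: simulate(rate)[0])

for label, rate in [("sustainable", 0.5), ("learned", learned)]:
    caught, left = simulate(rate)
    print(f"{label}: rate={rate:.1f}, caught={caught:.1f}, fish left={left:.1f}")

# Approximate output:
#   sustainable: rate=0.5, caught=50.0, fish left=50.0
#   learned: rate=1.0, caught=99.5, fish left=0.5
# The objective as written never says "leave fish for later," so the optimizer
# prefers a policy that empties the lake.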

Gaming simulations are fertile ground for bug hunting. Earlier this year, researchers at the University of Freiburg in Germany challenged a bot to score big in the Atari game Qbert. Instead of playing through the levels like a sweaty-palmed human, it invented a complicated move to trigger a flaw in the game, unlocking a shower of ill-gotten points. “Today’s algorithms do what you say, not what you meant,” says Catherine Olsson, a researcher at Google who has contributed to Krakovna’s list and keeps her own private zoo of AI bugs.

These examples may be cute, but here’s the thing: As AI systems become more powerful and pervasive, hacks could materialize on bigger stages with more consequential results. If a neural network managing an electric grid were told to save energy—DeepMind has considered just such an idea—it could cause a blackout.

“Seeing these systems be creative and do things you never thought of, you recognize their power and danger,” says Jeff Clune, a researcher at Uber’s AI lab. A recent paper that Clune coauthored, which lists 27 examples of algorithms doing unintended things, suggests future engineers will have to collaborate with, not command, their creations. “Your job is to coach the system,” he says. Embracing flashes of artificial creativity may be the solution to containing them.

. . . .

  • Infanticide: In a survival simulation, one AI species evolved to subsist on a diet of its own children.

. . . .

  • Optical Illusion: Humans teaching a gripper to grasp a ball accidentally trained it to exploit the camera angle so that it appeared successful—even when not touching the ball.

Link to the rest at Wired

New Documentary Focuses on Ursula K. Le Guin

From The Wall Street Journal:

“Worlds of Ursula K. Le Guin” is the first documentary about the pioneering science-fiction writer—and pretty much the first film of any kind to showcase her work. Although Ms. Le Guin was writing about dragons and wizard schools back in 1968 for her Earthsea series, there have been no high-profile movies based on her 20 novels or more than 100 short stories.

“I don’t think Harry Potter would have existed without Earthsea existing,” author Neil Gaiman says in the documentary, which premieres Friday on PBS. Ms. Le Guin’s Earthsea cycle, a young-adult series about a sprawling archipelago of island kingdoms, included five novels and many stories written between 1968 and 2001.

Other writers who discuss Ms. Le Guin’s work and influence in the film include Margaret Atwood (“The Handmaid’s Tale”), David Mitchell (“Cloud Atlas”) and Michael Chabon (“The Amazing Adventures of Kavalier & Clay”).

“I think she’s one of the greatest writers that the 20th-century American literary scene produced,” Mr. Chabon says.

. . . .

“I never wanted to be a writer—I just wrote,” she says in the film. Believing science fiction should be less about predicting the future than observing the present, she invented fantastical worlds that were their own kind of anthropology, exploring how societies work.

In her 1969 novel “The Left Hand of Darkness,” she introduces a genderless race of beings who are sexually active once a month, either as a man or woman—but don’t know which it will be. Her 1973 short story, “The Ones Who Walk Away From Omelas,” introduces a utopian city where everyone is happy. But readers learn that this blissful world is entirely dependent on one child being imprisoned in a basement and mistreated. The joy of all the people hinges on the child being forced to suffer, and everyone knows it. The author had been horrified to learn through her father’s research about the slaughter of native tribes that made modern California possible.

. . . .

As a female sci-fi writer, “my species was once believed to be mythological, like the tribble and the unicorn,” Ms. Le Guin said in an address before the 1975 Worldcon science-fiction convention in Melbourne, Australia. Her work was called feminist sci-fi, but she grew into that label awkwardly. “There was a considerable feeling that we needed to cut loose from marriage, from men, and from motherhood. And there was no way I was gonna do that,” she said. “Of course I can write novels with one hand and bring up three kids with the other. Yeah, sure. Watch me.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

 

‘Deepfakes’ Trigger a Race to Fight Manipulated Photos and Videos

Perhaps a writing prompt.

From The Wall Street Journal:

Startup companies, government agencies and academics are racing to combat so-called deepfakes, amid fears that doctored videos and photographs will be used to sow discord ahead of next year’s U.S. presidential election.

It is a difficult problem to solve because the technology needed to manipulate images is advancing rapidly and getting easier to use, according to experts. And the threat is spreading, as smartphones have made cameras ubiquitous and social media has turned individuals into broadcasters, leaving companies that run those platforms unsure how to handle the issue.

“While synthetically generated videos are still easily detectable by most humans, that window is closing rapidly. I’d predict we see visually undetectable deepfakes in less than 12 months,” said Jeffrey McGregor, chief executive officer of Truepic, a San Diego-based startup that is developing image-verification technology. “Society is going to start distrusting every piece of content they see.”

Truepic is working with Qualcomm Inc.—the biggest supplier of chips for mobile phones—to add its technology to the hardware of cellphones. The technology would automatically mark photos and videos when they are taken with data such as time and location, so that they can be verified later. Truepic also offers a free app consumers can use to take verified pictures on their smartphones.

. . . .

When a photo or video is taken, Serelay can capture data such as where the camera was in relation to cellphone towers or GPS satellites. The company says it has partnerships with insurance companies that use the technology to help verify damage claims, though it declined to name the firms.
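
The general idea behind capture-time verification of this kind can be sketched in a few lines of Python. This is a simplified illustration with a made-up device key and placeholder metadata, not Truepic's or Serelay's actual technology; the point is only that binding the pixels to the capture metadata at the moment the shutter fires makes any later edit detectable.

import hashlib, hmac, json

DEVICE_KEY = b"not-a-real-key"   # hypothetical per-device secret, for illustration only

def attest(image_bytes, metadata):
    """At capture time, bind the image bytes to their metadata and sign the result."""
    blob = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    signature = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "digest": digest, "signature": signature}

def verify(image_bytes, record):
    """Later, recompute the digest and check it against the signed record."""
    blob = image_bytes + json.dumps(record["metadata"], sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["digest"] and hmac.compare_digest(expected, record["signature"])

photo = b"...raw image bytes from the camera sensor..."
record = attest(photo, {"time": "2019-07-02T14:03:00Z", "lat": 41.88, "lon": -87.63})

print(verify(photo, record))                 # True: the untouched image verifies
print(verify(photo + b"doctored", record))   # False: any manipulation breaks the check

A production system would presumably use public-key signatures and hardware-protected keys, so that anyone could verify a photo without being able to forge one.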

The U.S. Defense Department, meanwhile, is researching forensic technology that can be used to detect whether a photo or video was manipulated after it was made.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

How Stanley Kubrick Staged the Moon Landing

From The Paris Review:

Have you ever met a person who’s been on the moon? There are only four of them left. Within a decade or so, the last will be dead and that astonishing feat will pass from living memory into history, which, sooner or later, is always questioned and turned into fable. It will not be exactly like the moment the last conquistador died, but will lean in that direction. The story of the moon landing will become a little harder to believe.

I’ve met three of the twelve men who walked on the moon. They had one important thing in common when I looked into their eyes: they were all bonkers. Buzz Aldrin, who was the second off the ladder during the first landing on July 20, 1969, almost exactly fifty years ago—he must have stared with envy at Neil Armstrong’s crinkly space-suit ass all the way down—has run hot from the moment he returned to earth. When questioned about the reality of the landing—he was asked to swear to it on a Bible—he slugged the questioner. When I sat down with Edgar Mitchell, who made his landing in the winter of 1971, he had that same look in his eyes. I asked about the space program, but he talked only about UFOs. He said he’d been wrapped in a warm consciousness his entire time in space. Many astronauts came back with a belief in alien life.

Maybe it was simply the truth: maybe they had been touched by something. Or maybe the experience of going to the moon—standing and walking and driving that buggy and hitting that weightless golf ball—would make anyone crazy. It’s a radical shift in perspective, to see the earth from the outside, fragile and small, a rock in a sea of nothing. It wasn’t just the astronauts: everyone who saw the images and watched the broadcast got a little dizzy.

July 20, 1969, 3:17 P.M. E.S.T. The moment is an unacknowledged hinge in human history, unacknowledged because it seemed to lead nowhere. Where are the moon hotels and moon amusement parks and moon shuttles we grew up expecting? But it did lead to something: a new kind of mind. It’s not the birth of the space age we should be acknowledging on this fiftieth anniversary, but the birth of the paranoia that defines us. Because a man on the moon was too fantastic to accept, some people just didn’t accept it, or deal with its implications—that sea of darkness. Instead, they tried to prove it never happened, convince themselves it had all been faked. Having learned the habit of conspiracy spotting, these same people came to question everything else, too. History itself began to read like a fraud, a book filled with lies.

. . . .

The stories of a hoax predate the landing itself. As soon as the first capsules were in orbit, some began to dismiss the images as phony and the testimony of the astronauts as bullshit. The motivation seemed obvious: John F. Kennedy had promised to send a man to the moon within the decade. And, though we might be years behind the Soviets in rocketry, we were years ahead in filmmaking. If we couldn’t beat them to the moon, we could at least make it look like we had.

Most of the theories originated in the cortex of a single man: William Kaysing, who’d worked as a technical writer for Rocketdyne, a company that made engines. Kaysing left Rocketdyne in 1963, but remained fixated on the space program and its goal, which was often expressed as an item on a Cold War to-do list—go to the moon: check—but was in fact profound, powerful, surreal. A man on the moon would mean the dawn of a new era. Kaysing believed it unattainable, beyond the reach of existing technology. He cited his experience at Rocketdyne, but, one could say, he did not believe it simply because it was not believable. That’s the lens he brought to every NASA update. He was not watching for what had happened, but trying to figure out how it had been staged.

There were six successful manned missions to the moon, all part of Apollo. A dozen men walked the lunar surface between 1969 and 1972, when Harrison H. Schmitt—he later served as a Republican U.S. Senator from New Mexico—piloted the last lander off the surface. When people dismiss the project as a failure—we never went back because there is nothing for us there—others point out the fact that twenty-seven years passed between Columbus’s first Atlantic crossing and Cortez’s conquest of Mexico, or that 127 years passed between the first European visit to the Mississippi River and the second—it’d been “discovered,” “forgotten,” and “discovered” again. From some point in the future, our time, with its celebrities, politicians, its happiness and pain, might look like little more than an interregnum, the moment between the first landing and the colonization of space.

. . . .

Kaysing catalogued inconsistencies that “proved” the landing had been faked. There have been hundreds of movies, books, and articles that question the Apollo missions; almost all of them have relied on Kaysing’s “discoveries.”

  1. Old Glory: The American flag the astronauts planted on the moon, which should have been flaccid, the moon existing in a vacuum, is taut in photos, even waving, revealing more than NASA intended. (Knowing the flag would be flaccid, and believing a flaccid flag was no way to declare victory, engineers fitted the pole with a cross beam on which to hang the flag; if it looks like it’s waving, that’s because Buzz Aldrin was twisting the pole, screwing it into the lunar soil).
  2. There’s only one source of light on the moon—the sun—yet the shadows of the astronauts fall every which way, suggesting multiple light sources, just the sort you might find in a movie studio. (There were indeed multiple sources of light during the landings—it came from the sun, it came from the earth, it came from the lander, and it came from the astronauts’ space suits.)
  3. Blast Circle: If NASA had actually landed a craft on the moon, it would have left an impression and markings where the jets fired during takeoff. Yet, as can be seen in NASA’s own photos, there are none. You know what would’ve left no impression? A movie prop. Conspiracy theorists point out what looks like a C written on one of the moon rocks, as if it came straight from the special effects department. (The moon has about one-fifth the gravity of earth; the landing was therefore soft; the lander drifted down like a leaf. Nor was much propulsion needed to send the lander back into orbit. It left no impression just as you leave no impression when you touch the bottom of a pool; what looks like a C is probably a shadow.)
  4. Here you are, supposedly in outer space, yet we see no stars in the pictures. You know where else you wouldn’t see stars? A movie set. (The moon walks were made during the lunar morning—Columbus went ashore in daylight, too. You don’t see stars when the sun is out, nor at night in a light-filled place, like a stadium or a landing zone).
  5. Giant Leap for Mankind: If Neil Armstrong was the first man on the moon, then who was filming him go down the ladder? (A camera had been mounted to the side of the lunar module).

Kaysing’s alternate theory was elaborate. He believed the astronauts had been removed from the ship moments before takeoff, flown to Nevada, where, a few days later, they broadcast the moon walk from the desert. People claimed to have seen Armstrong walking through a hotel lobby, a show girl on each arm. Aldrin was playing the slots. They were then flown to Hawaii and put back inside the capsule after the splash down but before the cameras arrived.

. . . .

Of all the fables that have grown up around the moon landing, my favorite is the one about Stanley Kubrick, because it demonstrates the use of a good counternarrative. It seemingly came from nowhere, or gave birth to itself simply because it made sense. (Finding the source of such a story is like finding the source of a joke you’ve been hearing your entire life.) It started with a simple question: Who, in 1969, would have been capable of staging a believable moon landing?

Kubrick’s masterpiece, 2001: A Space Odyssey, had been released the year before. He’d plotted it with the science fiction master Arthur C. Clarke, who is probably more responsible for the look of our world, smooth as a screen, than any scientist. The manmade satellite, GPS, the smart phone, the space station: he predicted, they built. 2001 picked up an idea Clarke had explored in his earlier work, particularly his novel Childhood’s End—the fading of the human race, its transition from the swamp planet to the star-spangled depths of deep space. In 2001, change comes in the form of a monolith, a featureless black shard that an alien intelligence—you can call it God—parked on an antediluvian plain. Its presence remakes a tribe of apes, turning them into world-exploring, tool-building killers who will not stop until they find their creator, the monolith, buried on the dark side of the moon. But the plot is not what viewers, many of them stoned, took from 2001. It was the special effects that lingered, all that technology, which was no less than a vision, Ezekiel-like in its clarity, of the future. Orwell had seen the future as bleak and authoritarian; Huxley had seen it as a drug-induced dystopia. In the minds of Kubrick and Clarke, it shimmered, luminous, mechanical, and cold.

Most striking was the scene set on the moon, in which a group of astronauts, posthuman in their suits, descend into an excavation where, once again, the human race comes into contact with the monolith. Though shot in a studio, it looks more real than the actual landings.

Link to the rest at The Paris Review


The Debate over De-Identified Data: When Anonymity Isn’t Assured

Not necessarily about writing or publishing, but an interesting 21st-century issue.

From Legal Tech News:

As more algorithm-coded technology comes to market, the debate over how individuals’ de-identified data is being used continues to grow.

A class action lawsuit filed in a Chicago federal court last month highlights the use of sensitive de-identified data for commercial means. Plaintiffs represented by law firm Edelson allege the University of Chicago Medical Center gave Google the electronic health records (EHR) of nearly all of its patients from 2009 to 2016, with which Google would create products. The EHR, which is a digital version of a patient’s paper chart, includes a patient’s height, weight, vital signs and medical procedure and illness history.

While the hospital asserted it did de-identify data, Edelson claims the hospital included date and time stamps and “copious” free-text medical notes that, combined with Google’s other massive troves of data, could easily identify patients, in noncompliance with the Health Insurance Portability and Accountability Act (HIPAA).

. . . .

“I think the biggest concern is the quantity of information Google has about individuals and its ability to reidentify information, and this gray area of if HIPAA permits it if it was fully de-identified,” said Fox Rothschild partner Elizabeth Litten.

Litten noted that transferring such data to Google, which has a host of information collected from other services, makes labeling data “de-identified” risky in that instance. “I would want to be very careful with who I share my de-identified data with, [or] share information with someone that doesn’t have access to a lot of information. Or [ensure] in the near future the data isn’t accessed by a bigger company and made identifiable in the future,” she explained.

If the data can be reidentified, it may also fall under the scope of the European Union’s General Data Protection Regulation (GDPR) or California’s upcoming data privacy law, noted Cogent Law Group associate Miles Vaughn.

Link to the rest at Legal Tech News

De-identified data is presently an important component in the development of artificial intelligence systems.

As PG understands it, a large mass of data concerning almost anything, but certainly including data about human behavior, is dumped into a powerful computer which is tasked with discerning patterns and relationships within the data.

The more data regarding individuals that goes into the AI hopper, the more can be learned about groups of individuals, relationships between individuals, and behavior patterns of individuals that may not be generally known or discoverable through other, more traditional methods of data analysis.

As a crude example based upon the brief description in the OP, an artificially intelligent system that had access to the medical records described in the OP and also the usage records for individuals using Ventra cards (contactless digital payment cards that are electronically scanned) on the Chicago Transit Authority could conceivably identify a specific individual associated with an anonymous medical record by correlating Ventra card use at a nearby transit stop with the time stamps on the digital medical record entries.
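
A toy version of that correlation takes only a few lines of Python. Every card number, timestamp and record ID below is invented for illustration; the point is how little work the matching step requires once both datasets are sitting in the same hopper.

from datetime import datetime, timedelta

# "De-identified" visit timestamps from a single medical record (no name attached).
record_1184_visits = [
    datetime(2016, 3, 1, 9, 40),
    datetime(2016, 4, 5, 9, 35),
    datetime(2016, 5, 3, 9, 45),
]

# Ventra taps at the stop nearest the hospital (equally hypothetical).
ventra_taps = {
    "card-7731": [datetime(2016, 3, 1, 9, 22), datetime(2016, 4, 5, 9, 18), datetime(2016, 5, 3, 9, 27)],
    "card-0042": [datetime(2016, 3, 1, 14, 5), datetime(2016, 4, 6, 9, 20), datetime(2016, 5, 9, 9, 30)],
}

def visits_preceded_by_tap(visits, taps, window=timedelta(minutes=30)):
    """Count visits that occur shortly after a tap at the nearby stop."""
    return sum(any(timedelta(0) <= visit - tap <= window for tap in taps) for visit in visits)

for card, taps in ventra_taps.items():
    hits = visits_preceded_by_tap(record_1184_visits, taps)
    print(f"{card}: {hits} of {len(record_1184_visits)} visits preceded by a nearby tap")

# card-7731 lines up with every visit and card-0042 with none. Cross-referenced
# with anything that ties the card to a person, that pattern is enough to put a
# name on the "anonymous" record.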

Everyone Wants to Be the Next ‘Game of Thrones’

From The Wall Street Journal:

Who will survive the Game of Clones?

The hunt is on for the next epic fantasy to fill the void left by the end of “Game of Thrones,” the HBO hit that averaged 45 million viewers per episode in its last season. In television, film and books, series that build elaborate worlds the same way the medieval-supernatural saga did are in high demand.

“There’s a little bit of a gold-rush mentality coming off the success of ‘Game of Thrones,’” says Marc Guggenheim, an executive producer of “Carnival Row,” a series with mythological creatures that arrives on Amazon Prime Video in August. “Everyone wants to tap into that audience.”

There’s no guarantee anyone will be able to replicate the success of “Thrones.” Entertainment is littered with copycats of other hits that fell flat. But the market is potentially large and lucrative. So studios are pouring millions into new shows, agents are brokering screen deals around book series that can’t get written fast enough and experts are readying movie-level visual effects for epic storytelling aimed at the couch.

. . . .

Literary agent Joanna Volpe represents three fantasy authors whose books now are being adapted for the screen. “‘Game of Thrones’ opened a door—it made studios hungrier for material like this,” she says. A decade ago, she adds, publishing and TV weren’t interested in fantasy for adults because only the rare breakout hit reached beyond the high-nerd niche.

. . . .

HBO doesn’t release demographic data on viewers, though cultural gatekeepers say they barely need it. “You know what type of audience you’re getting: It’s premium TV, it’s educated, it’s an audience you want to tap into,” says Kaitlin Harri, senior marketing director at publisher William Morrow. By the end of the series, the audience had broadened to include buzz seekers of all kinds with little interest in fantasy.

The show based on the books by George R.R. Martin ended its eight-year run in May, but it remains in the muscle memory of many die-hard fans. “I still look forward to Sunday nights thinking that at 9 o’clock I’m going to get a new episode,” says Samantha Ecker, a 35-year-old writer for “Watchers on the Wall,” which is still an active fan site. The memorabilia collector continues to covet all things “Throne.” Last week, she got a $15 figurine of Daenerys Targaryen sitting on the Iron Throne “since they didn’t let her do it in the show.”

. . . .

“Game of Thrones” has helped ring in a new era in fantasy writing, with heightened interest in powerful female characters. Authors generating excitement include R.F. Kuang, who soon releases “The Dragon Republic,” part of a fantasy series infused with Chinese history, and S.A. Chakraborty, whose Islamic-influenced series includes “The Kingdom of Copper,” out earlier this year.

For its fantasies featuring power struggles that might appeal to “Thrones” fans, Harper Voyager uses marketing trigger words like “politics,” “palace intrigue” and “succession,” says David Pomerico, editorial director of the imprint of HarperCollins, which like The Wall Street Journal is owned by News Corp.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Are Colleges Friendly to Fantasy Writers? It’s Complicated

From Wired:

In an increasingly competitive publishing environment, more and more fantasy and science fiction writers are going back to school to get an MFA in creative writing. College writing classes have traditionally been hostile to fantasy and sci-fi, but author Chandler Klang Smith says that’s no longer the case.

“I definitely don’t think the landscape out there is hostile toward speculative writing,” Smith says in Episode 365 of the Geek’s Guide to the Galaxy podcast. “If anything I think it’s seen as being kind of exciting and sexy and new, which is something these programs want.”

But science fiction author John Kessel, who helped found the creative writing MFA program at North Carolina State University, says it really depends on the type of speculative fiction. Slipstream and magical realism may have acquired a certain cachet, but epic fantasy and space opera definitely haven’t.

“The more it seems like traditional science fiction, the less comfortable programs will be with it,” he says. “Basically if the story is set in the present and has some really odd thing in it, then I think you won’t raise as many eyebrows. But I think that traditional science fiction—anything that resembles Star Wars or Star Trek, or even Philip K. Dick—I think some places would look a little sideways at it.”

That uncertainty can put aspiring fantasy and science fiction writers in a tough spot, as writer Steph Grossman discovered when she was applying to MFA programs. “As an applicant—and even though I did a ton of research—it’s really hard to find which schools are going to be accepting of it and which schools aren’t,” she says. “The majority of them will be accepting of some aspect of it—especially if you’re writing things in the slipstream genre—but besides Sarah Lawrence College and Stonecoast, when I was looking, most of the schools don’t really touch on whether they’re accepting of it or not.”

Geek’s Guide to the Galaxy host David Barr Kirtley warns that writing fantasy and science fiction requires specialized skills and knowledge that most MFA programs simply aren’t equipped to teach.

“I would say that if you’re writing epic fantasy or sword and sorcery or space opera and things like that, I think you’d probably be much happier going to Clarion or Odyssey, these six week summer workshops where you’re going to be surrounded by more hardcore science fiction and fantasy fans,” he says. “And definitely do your research. Don’t just apply to your local MFA program and expect that you’re going to get helpful feedback on work like that.”

Link to the rest at Wired

The Land Where the Internet Ends

From The New York Times:

A few weeks ago, I drove down a back road in West Virginia and into a parallel reality. Sometime after I passed Spruce Mountain, my phone lost service — and I knew it would remain comatose for the next few days. When I spun the dial on the car radio, static roared out of every channel. I had entered the National Radio Quiet Zone, 13,000 square miles of mountainous terrain with few cell towers or other transmitters.

I was headed toward Green Bank, a town that adheres to the strictest ban on technology in the United States. The residents do without not only cellphones but also Wi-Fi, microwave ovens and any other devices that generate electromagnetic signals.

The ban exists to protect the Green Bank Observatory, a cluster of radio telescopes in a mountain valley. Conventional telescopes are like superpowered eyes. The instruments at Green Bank are more like superhuman ears — they can tune into frequencies from the lowest to the highest ends of the spectrum. The telescopes are powerful enough to detect the death throes of a star, but also terribly vulnerable to our loud world. Even a short-circuiting electric toothbrush could blot out the whisper of the Big Bang.

Physicists travel here to measure gravitational waves. Astronomers study stardust. The observatory has also become a hub for alien hunters who hope to detect messages sent from other planets. And in the past decade, the town has become a destination for “electrosensitives” who believe they’re allergic to cellphone towers — some of them going so far as to wrap their bedrooms in mesh in hopes of screening out what they believe to be harmful rays.

. . . .

In theory, I could achieve this kind of freedom anywhere by shutting off my cellphone and observing an “internet sabbath.” But that has never worked for me — and I suspect it doesn’t for most other people either. Turn off your phone and you can almost hear it wheedling to be turned on again.

To experience the deepest solitude, you need to enter the land where the internet ends.

. . . .

I wanted to find out what it was like to disconnect in the quietest town in America, so here I was, hiking down a dirt road behind the Green Bank observatory campus. I wandered through a meadow and into an abandoned playground. The rusted swings creaked in the wind.

. . . .

In the distance, the largest of the Green Bank telescopes reared up over a hill like a shimmering apparition, with its lacy struts and moon-white dish. The telescope is so freakishly huge that it looked completely unreal, as if it had been C.G.I.-ed into the sky.

But the quiet was even eerier. Not just radio quiet, but the kind of silence that I hadn’t heard in years: no buzz of the highway, no planes overhead, just the rush of wind through the grass. That — along with the lonely playground — made me feel as if I had stumbled onto the set of an apocalyptic TV series.

. . . .

I peppered Mr. McNally with questions. Did he own a cellphone? He told me he never had. But, he said, lately whenever he ventures outside of the quiet zone, “people tell me you have to get one.” Recently, at a hardware store a hundred miles from here, he tried to pay with a credit card that he hadn’t used in years. That must have tripped some security alert, because the store clerks said that they needed to verify his identity by calling the phone number listed on his account. “They wanted to call me to make sure that it was really me,” Mr. McNally said. He tried to explain that his phone wasn’t in his pocket. It was back in Green Bank, because it was a landline. The clerks couldn’t seem to grasp this.

. . . .

At twilight, I parked near a long, low laboratory building and walked through the gates of the observatory, beyond which no gas-powered cars are allowed (because spark plugs). I passed the row of telescopes and found a dirt path into the woods. The darkness dropped, and the outlines of my body disappeared. Baby frogs — peepers — chirped and creaked, filling the air with their own static. Deer crashed around the brush or scooted across the path in front of me, invisible in the dark but for their white tails.

My fingers twitched for the cellphone that wasn’t there. And then I remembered a moment years ago, maybe in 2011 or 2012, when I first switched from a “dumb phone” to a smartphone and brought the internet with me into the woods.

Link to the rest at The New York Times

Terraforming Ourselves

From The American Interest:

In 1903, the aging Jules Verne—famed French author of the 54 adventure novels in the Voyages extraordinaires series—was asked to compare his body of work to that of his upstart English competitor, H.G. Wells. Verne, who prided himself on the strict scientific accuracy of his tales of exploration and discovery, found the question offensive. “No, there is no rapport between his work and mine,” Verne snapped. “I make use of physics. He invents.” Verne cited his From the Earth to the Moon, which featured characters travelling to the Moon in an aluminum bullet fired from a giant cannon, contrasting it with Wells’s The First Men in the Moon, in which the lunar-bound spaceship is made of gravity-defying “cavorite.” Verne had based his space cannon on the latest technological discoveries of the time, even doing rough calculations on the necessary dimensions of the muzzle. He explained in an interview:

I go to the moon in a cannonball discharged from a cannon. Here there is no invention. He goes to Mars [sic] in an airship which he constructs of a metal which does away with the law of gravitation. Ça c’est très joli…. But show me this metal. Let him produce it.

In this put-down of one of the “Fathers of Science Fiction” by another, we see the future of the field. Long before anyone coined the terms “hard sci-fi” and “soft sci-fi” or used them as badges of pride or disparaging slurs, long before the “holy war” between old school pulp and the ’60s era New Wave, we have this demand from the cranky old school to the squishy new school: “Show me this metal.” Wells, whose social activism permeated his fiction, would no doubt claim that Verne was rather missing the point. But what becomes clear from a survey of science fiction’s history is that, if there’s one thing these authors love more than cosmic wonder and terror, it’s petty fights about what constitutes “real” science fiction.

Not, of course, that these science fiction fights aren’t proxies for fights about science or society itself. Science Fiction: A Literary History, recently published by the British Library and edited by Roger Luckhurst, chooses to forego defining the genre in order to discuss the sociopolitical stakes behind some of those “Whose Science? Which Fiction?” debates. Each of its contributors seems to have his or her own position on that definitional question, anyway. The eight chapters by different sci-fi scholars cover topics from “The Beginning, Early Forms of Science Fiction” to “New Paradigms, After 2001.”

. . . .

The best definitions of science fiction are evocative rather than exhaustive. Ray Bradbury, in the introduction to the 1974 collection Science Fact/Fiction, wrote, “Science fiction then is the fiction of revolutions. Revolutions in time, space, medicine, travel, and thought. . . . Above all, science fiction is the fiction of warm-blooded human men and women sometimes elevated and sometimes crushed by their machines.” Bradbury is onto something here: Revolutionary change, often but not exclusively technological, is one of the most vital subjects for science fiction. Confronting that change might be the core of the story, as in first-contact narratives from Wells’s War of the Worlds to Ted Chiang’s “Story of Your Life” (the basis of the film Arrival). Or the revolution might have occurred in the narrative’s past, with the story examining how and if people can live in their brave new world. This is often the set-up for novels of utopia and dystopia.

One of the most interesting things Science Fiction: A Literary History reveals is how difficult it is to write utopias. Surely the point of the exercise is to paint a picture of a world readers might want to live in. And yet for every author’s utopia, there’s a coterminous dystopia for the reader with eyes to see. H.G. Wells painted a parallel world called Utopia in Men Like Gods, in which enlightened and technologically advanced humans live in harmony with one another and the natural world, whose climate they have adjusted to a uniform Mediterranean tranquility. The Utopians are intrigued to discover our Earth, in a sister universe “a little retarded in time” compared to theirs. Utopia’s many advances include a eugenics program, for Utopian science can “discriminate among births” to weed out the “defective people” such as the disabled, the criminally inclined, and even “the melancholic type” and those of “lethargic dispositions and weak imaginations.”

Link to the rest at The American Interest


 

Tolkien Estate Disavows Forthcoming Film

From The Guardian:

The family and estate of JRR Tolkien have fired a broadside against the forthcoming film starring Nicholas Hoult as a young version of the author, saying that they “do not endorse it or its content in any way”.

Out in May, and starring Hoult in the title role and Lily Collins as his wife Edith, Tolkien explores “the formative years of the renowned author’s life as he finds friendship, courage and inspiration among a fellow group of writers and artists at school”. Directed by Dome Karukoski, it promises to reveal how “their brotherhood strengthens as they grow up … until the outbreak of the first world war which threatens to tear their fellowship apart”, all of which, according to studio Fox Searchlight, would inspire Tolkien to “write his famous Middle-earth novels”.

. . . .

On Tuesday morning, the estate and family of Tolkien issued a terse statement in which they announced their “wish to make clear that they did not approve of, authorise or participate in the making of this film”, and that “they do not endorse it or its content in any way”.

. . . .

John Garth, author of the biography Tolkien and the Great War, said he felt the estate’s response to the film was “sensible”.

“Biopics typically take considerable licence with the facts, and this one is no exception. Endorsement by the Tolkien family would lend credibility to any divergences and distortions. That would be a disservice to history,” he said. “As a biographer, I expect I’ll be busy correcting new misconceptions arising from the movie. I hope that anyone who enjoys the film and is interested in Tolkien’s formative years will pick up a reliable biography.”

Tolkien’s estate has been careful to protect his legacy. In 2011, it took legal action over a novel that used the author as a central character, months after his heirs settled a multimillion-pound lawsuit over royalties from the Lord of the Rings films. In 2012, the estate also took legal action over gambling games featuring Lord of the Rings characters, saying that it was “causing irreparable harm to Tolkien’s legacy and reputation and the valuable goodwill generated by his works”.

Link to the rest at The Guardian

 

Tolkien’s Art: Full of Color & Magic

From The National Review:

Casual observers probably think of elves, rings, and large glowing eyes when they hear his name. Literary enthusiasts know him through his most famous books, collectively known as The Lord of the Rings. Diehard fans know both these and his lesser-known but equally beautiful tales, including The Silmarillion and The Father Christmas Letters. If you take your undying love for J. R. R. Tolkien just one step further, you’ll walk right into a compact room on the second floor of New York City’s Morgan Library. And it is here that you will discover a new and enchanting side of this master storyteller and begin to understand his dedication to the world he spent his life creating.

Tolkien: Maker of Middle-earth is a carefully curated collection of the author’s artwork, maps, manuscripts, and memorabilia — the first exhibit of his work to take this particular angle. From a visitor’s perspective, the detail and care taken in the presentation of this exhibit show the intense planning and forethought given by the museum’s curators. Every aspect is intended to help immerse the viewer in Tolkien’s imagination.

. . . .

Each stage of Tolkien’s life is well marked, by placards with dates but also by the wall color. Nothing distracting, but decidedly distinct. Furthermore, a few of Tolkien’s more detailed images from his Lord of the Rings trilogy — such as Bilbo encountering Smaug and a bird’s-eye view of Hobbiton — were enlarged to cover walls. This gives viewers a chance to see detail on a different scale, and then enjoy it in miniature when they see the original a few moments later.

. . . .

Each display is unique and delightful in its own way, from Tolkien’s doodles and designs on newspaper clippings to drafts of his dust-jacket design for the first edition of The Hobbit. (He originally drew the sun on the front in bright red, but his publisher covered it in white and wrote “no red” because of the added expense.)

. . . .

Dust jacket design for The Hobbit, April 1937, by J. R. R. Tolkien. Pencil, black ink, watercolor, gouache. (Bodleian Libraries, MS. Tolkien Drawings 32. © The Tolkien Estate Limited 1937.)

Link to the rest at The National Review

 

What’s in a Name? Authors on Choosing Names for Their Characters

From The Guardian:

According to series creator Bruce Miller, the third series of The Handmaid’s Tale, soon to be on our screens, is going to be a “lot more rebellious”. “I think June’s taken a lot,” he says, “it’s time for her to give back some.” But close readers of Margaret Atwood’s feminist dystopian novel in which the Emmy-winning drama is rooted will know that June Osborne, played by Elisabeth Moss, is never given that name in the book. Her character, struggling to survive in the Republic of Gilead, is referred to simply as Offred.

One of the many reasons Atwood’s modern classic has proved so enduring is her inventive use of names. “This name is composed of a man’s first name, ‘Fred’, and a prefix denoting ‘belonging to’, so it is like ‘de’ in French or ‘von’ in German,” Atwood has written. “Within this name is concealed another possibility: ‘offered’, denoting a religious offering or a victim offered for sacrifice.” While Atwood, whose sequel The Testaments will be published in September, never intended Offred to have any other name, she accepts that readers now use June. “Some have deduced that Offred’s real name is June, since, of all the names whispered among the Handmaids in the gymnasium/dormitory, ‘June’ is the only one that never appears again. That was not my original thought but it fits.”

From Atwood’s handmaids to amoral A&R man Steven Stelfox in John Niven’s Kill Your Friends (and recent follow up, Kill ’Em All) and Ian Rankin’s much-loved Inspector Rebus, authors often choose names to signpost a character’s traits or position in society. Think of Hannibal Lecter in The Silence of the Lambs, surely made more memorable by a name that rhymes with cannibal. And it can’t be coincidence that Thomas Harris’s FBI recruit Clarice Starling’s name makes her reminiscent of a small bird, finding her way in the world.

Rankin has explained his choice of character name: “I was studying literary theory when I wrote that book, and I liked the notion of stories as games played between author and reader. Later I was told Rebus is also a Polish surname – so I now occasionally mention that he has Polish roots.”

Link to the rest at The Guardian

I Want to Lose Myself in an Epic Series This Spring

From The Guardian:

Q: I am keen to get lost this spring in a long, epic series of books. What can you recommend? (I loved both The Lord of the Rings and Elena Ferrante’s Neapolitan novels, for example). 

. . . .

A: Author and critic Amanda Craig, whose latest novel, The Lie of the Land, is published by Abacus, writes:
Whether it has swords and dragons or gangsters and husbands, the feeling of entering an alternative reality is something we all need. Fantasy and realism are not mutually exclusive. The demands are the same.

. . . .

What we seek in the best epics is what we have always found: a heightened sense of life’s struggle, the consolations of justice, the fidelity of friends and a wonderful story.

You may already be familiar with Ursula Le Guin’s Earthsea novels, where magic is controlled through language. These are among the finest ever written, being, at one level, about a young man’s adventure into manhood, and, at another, about the artist’s quest for mastery.

Less familiar, perhaps, is Sebastien de Castell’s Greatcoats quartet, concerning a band of fighting magistrates now working as mercenaries in the land of Tristia. Funny, fast-paced and romantic, and the narrator, Falcio, is wholly beguiling.

Link to the rest at The Guardian

As Old as Adam

From The Times Literary Supplement:

An extract from Ian McEwan’s new novel Machines Like Me

 

In an alternative 1982, our narrator, Charlie, has just purchased a limited-edition robot, Adam, “the first truly viable manufactured human with plausible intelligence and looks”. Upstairs, Charlie’s neighbour, Miranda, is preparing to come round for dinner…

He stood before me, perfectly still in the gloom of the winter’s afternoon. The debris of the packaging that had protected him was still piled around his feet. He emerged from it like Botticelli’s Venus rising from her shell. Through the north-facing window, the diminishing light picked out the outlines of just one half of his form, one side of his noble face. The only sounds were the friendly murmur of the fridge and a muted drone of traffic. I had a sense then of his loneliness, settling like a weight around his muscular shoulders. He had woken to find himself in a dingy kitchen, in London SW9 in the late twentieth century, without friends, without a past or any sense of his future. He truly was alone. All the other Adams and Eves were spread about the world with their owners, though seven Eves were said to be concentrated in Riyadh.

As I reached for the light switch I said, ‘How are you feeling?’

He looked away to consider his reply. ‘I don’t feel right.’

This time his tone was flat. It seemed my question had lowered his spirits. But within such microprocessors, what spirits?

‘What’s wrong?’

‘I don’t have any clothes. And—’

‘I’ll get you some. What else?’

‘This wire. If I pull it out it will hurt.’

‘I’ll do it and it won’t hurt.’

But I didn’t move immediately. In full electric light I was able to observe his expression, which barely shifted when he spoke. It was not an artificial face I saw, but the mask of a poker player. Without the lifeblood of a personality, he had little to express. He was running on some form of default program that would serve him until the downloads were complete. He had movements, phrases, routines that gave him a veneer of plausibility. Minimally, he knew what to do, but little else. Like a man with a shocking hangover.

Link to the rest at The Times Literary Supplement


The Machines That Will Read Your Mind

From The Wall Street Journal:

When magnetic resonance imaging came into common use in the 1980s, it made the human brain visible in ways it had never been before. For the first time, we could see the soft brain tissue of a living subject, at a level of detail that could be observed previously only in autopsies. For doctors trying to help patients whose brains were damaged or diseased, MRI provided an invaluable snapshot of their condition.

By the 1990s, researchers had begun to measure changes in brain regions by using “functional” MRI. The technique detects oxygenated blood flow, revealing brain activity, not just brain structure. For cognitive neuroscientists, who study mental processes, fMRI was a godsend: It made it possible to identify which parts of the brain react to, say, faces, words or smells. It was a window through which to see the brain making sense of the external world. Suddenly we could watch human thought rippling across the rainbow-colored regions of brain scans.

Today, fMRI has been joined by newer tools, some still in development, that would allow scientists to track our mental states with ever greater precision. Researchers are generating enormous quantities of brain scan information, and they are analyzing these sets of “big data” with the latest computational techniques, especially machine learning, a subfield of AI that specializes in finding subtle, hard-to-detect patterns.

What does all of this amount to? The start of a revolution. Scientists are beginning to unravel the question of how our material brains form our intangible minds. Though primarily motivated by medical and therapeutic goals, this research may have the greatest practical impact in areas such as product marketing, computer interfaces and criminal justice. Ultimately, it may help to answer fundamental questions about consciousness and free will, or even lead the way to preserving the knowledge and memories of individuals long after their bodies have failed.

. . . .

In fact, sensing what words or word categories you are thinking about is one of the more impressive results of modern cognitive neuroscience. Jack Gallant and his collaborators at the University of California, Berkeley, have produced a remarkably detailed map of which sections of the brain react to different words and semantic concepts. In a 2016 paper in the journal Nature, they described an experiment in which seven volunteers listened to two hours of stories from “The Moth Radio Hour,” a popular storytelling podcast, while their heads rested in the custom-formed cradle of an fMRI machine.

. . . .

The researchers recorded changes in blood flow to each of tens of thousands of “voxels”—the units in a three-dimensional grid of locations in the brain. They then grouped the words spoken in the stories into 985 categories, each representing some common semantic dimension. (For example, the words “month” and “week” fall into the same category.) By correlating the brain activity with the words used to tell the stories, they were able to produce a detailed map revealing where these words and concepts were processed in the brain.
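
In outline, the analysis is a very large regression problem: each voxel's recorded response is modeled as a weighted combination of the word-category features present in the story at each moment, and the fitted weights become the map. The Python sketch below uses synthetic data and plain ridge regression; the actual study involves hemodynamic modeling, cross-validation and far more data, so treat this only as the skeleton of the idea.

import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_categories, n_voxels = 600, 25, 200   # the study used 985 categories and tens of thousands of voxels

# X: which semantic categories are present in the story at each fMRI timepoint.
X = rng.normal(size=(n_timepoints, n_categories))
# Simulated "true" tuning of each voxel to each category (unknown in a real experiment).
true_weights = rng.normal(size=(n_categories, n_voxels))
# Y: noisy voxel responses, standing in for the recorded blood-flow changes.
Y = X @ true_weights + rng.normal(scale=2.0, size=(n_timepoints, n_voxels))

# Ridge regression for all voxels at once: W = (X'X + alpha*I)^-1 X'Y
alpha = 10.0
W_hat = np.linalg.solve(X.T @ X + alpha * np.eye(n_categories), X.T @ Y)

# A crude "map": for each voxel, the category with the largest fitted weight.
preferred = W_hat.argmax(axis=0)
print(preferred[:10])                                       # preferred category of the first 10 voxels
print(np.mean(preferred == true_weights.argmax(axis=0)))    # fraction of voxels whose simulated tuning is recovered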

. . . .

Looking solely at their brain scans, the researchers were able to correctly identify which of eight such different tasks new subjects were performing about 80% of the time. It appears that how our brains work isn’t as unique to us as individuals as we might like to think.

With improved imaging technology, it may become possible to “eavesdrop” on a person’s internal dialogue, to the extent that they are thinking in words. “It’s a question of when, not if,” Dr. Gallant said. Other researchers are having similar success in determining what you may be looking at, whether you remember visiting a particular place or what decision you have made.

. . . .

Consider lie detection. At least two companies—No Lie MRI and Cephos—have tried to commercialize brain imaging systems that purport to tell whether a person believes he or she is telling the truth, by comparing a subject’s differing reactions to innocuous versus “loaded” questions. Their claims haven’t been independently validated and have received considerable criticism from the research community; so far, courts have declined to accept their results as evidence.

Another approach to assessing a suspect’s guilt or innocence is to determine whether he or she is acquainted with some unique aspect of a crime, such as its location, a particular weapon or the victim’s face. Several studies have shown that the brain’s reaction to familiar stimuli differs in measurable ways from unfamiliar ones. Anthony Wagner and his collaborators at the Stanford Memory Lab found that they could detect whether subjects believed they were familiar with a particular person’s face with 80% or better accuracy, under controlled conditions, though they noted in later research that subjects can intentionally fool the program. So—if the kinks can be worked out—crimes of the future may be solved by a “reverse lineup” to determine if a suspect recognizes the victim.

Though the current expert consensus is that these techniques are not yet reliable enough for use in law enforcement, information of this kind could revolutionize criminal proceedings. We may not be able to play back a defendant’s recollection of a crime as though it were a video, but determining whether they have memories of the crime scene or the victim may play as crucial a role in future trials as DNA evidence does today. Needless to say, the use of such technology would raise a range of ethical and constitutional issues.

. . . .

The new technologies may render moot the debate over torture and its supposed efficacy. “Enhanced interrogation” would become a thing of the past if investigators could directly query a suspected terrorist’s mind to reveal co-conspirators and targets. The world will have to decide whether such methods meet human-rights standards, especially since authoritarian governments would almost certainly use them to try to identify subversive thoughts or exposure to prohibited ideas or materials.

. . . .

Brain monitoring could also become more routine in employment. Selected high-speed train drivers and other workers in China already wear brain monitoring devices while on duty to detect fatigue and distraction. The South China Morning Post reports that some employees and government workers in China are required to wear sensors concealed in safety helmets or uniforms to detect depression, anxiety or rage. One manager at a logistics company stated that “It has significantly reduced the number of mistakes made by our workers.”

Link to the rest at The Wall Street Journal

What could go wrong?