Fantasy/SciFi

When Bots Teach Themselves to Cheat

1 August 2019

From Wired:

Once upon a time, a bot deep in a game of tic-tac-toe figured out that making improbable moves caused its bot opponent to crash. Smart. Also sassy.

Moments when experimental bots go rogue—some would call it cheating—are not typically celebrated in scientific papers or press releases. Most AI researchers strive to avoid them, but a select few document and study these bugs in the hopes of revealing the roots of algorithmic impishness. “We don’t want to wait until these things start to appear in the real world,” says Victoria Krakovna, a research scientist at Alphabet’s DeepMind unit. Krakovna is the keeper of a crowdsourced list of AI bugs. To date, it includes more than three dozen incidents of algorithms finding loopholes in their programs or hacking their environments.

The specimens collected by Krakovna and fellow bug hunters point to a communication problem between humans and machines: Given a clear goal, an algorithm can master complex tasks, such as beating a world champion at Go. But even with logical parameters, it turns out that mathematical optimization empowers bots to develop shortcuts humans didn’t think to deem off-limits. Teach a learning algorithm to fish, and it might just drain the lake.

Gaming simulations are fertile ground for bug hunting. Earlier this year, researchers at the University of Freiburg in Germany challenged a bot to score big in the Atari game Qbert. Instead of playing through the levels like a sweaty-palmed human, it invented a complicated move to trigger a flaw in the game, unlocking a shower of ill-gotten points. “Today’s algorithms do what you say, not what you meant,” says Catherine Olsson, a researcher at Google who has contributed to Krakovna’s list and keeps her own private zoo of AI bugs.

These examples may be cute, but here’s the thing: As AI systems become more powerful and pervasive, hacks could materialize on bigger stages with more consequential results. If a neural network managing an electric grid were told to save energy—DeepMind has considered just such an idea—it could cause a blackout.

“Seeing these systems be creative and do things you never thought of, you recognize their power and danger,” says Jeff Clune, a researcher at Uber’s AI lab. A recent paper that Clune coauthored, which lists 27 examples of algorithms doing unintended things, suggests future engineers will have to collaborate with, not command, their creations. “Your job is to coach the system,” he says. Embracing flashes of artificial creativity may be the solution to containing them.

. . . .

  • Infanticide: In a survival simulation, one AI species evolved to subsist on a diet of its own children.

. . . .

  • Optical Illusion: Humans teaching a gripper to grasp a ball accidentally trained it to exploit the camera angle so that it appeared successful—even when not touching the ball.

Link to the rest at Wired
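The article’s refrain—that algorithms do what you say, not what you meant—can be reduced to a toy sketch. The one below is PG-free-hand and hypothetical, not drawn from any incident on Krakovna’s list: the policies, probabilities, and scoring rule are invented purely to show how a naive optimizer, told only to maximize time before game over, settles on pausing the game forever.

```python
# Toy illustration (hypothetical, not from the article): an optimizer given the
# literal objective "maximize time before game over" discovers that pausing
# forever scores best. It does what we said, not what we meant.
import random

def survival_time(policy, max_steps=1000):
    """Simulate a crude game: each active step carries a 5% chance of losing.
    The 'pause' policy never advances the game, so it never loses."""
    steps = 0
    for _ in range(max_steps):
        if policy == "pause":
            steps += 1          # the clock runs, nothing can go wrong
        else:
            steps += 1
            if random.random() < 0.05:
                break           # game over
    return steps

policies = ["play_safely", "play_aggressively", "pause"]
best = max(policies, key=lambda p: sum(survival_time(p) for _ in range(50)))
print("Objective picks:", best)   # almost certainly "pause"
```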

New Documentary Focuses on Ursula K. Le Guin

31 July 2019

From The Wall Street Journal:

“Worlds of Ursula K. Le Guin” is the first documentary about the pioneering science-fiction writer—and pretty much the first film of any kind to showcase her work. Although Ms. Le Guin was writing about dragons and wizard schools back in 1968 for her Earthsea series, there have been no high-profile movies based on her 20 novels or more than 100 short stories.

“I don’t think Harry Potter would have existed without Earthsea existing,” author Neil Gaiman says in the documentary, which premieres Friday on PBS. Ms. Le Guin’s Earthsea cycle, a young-adult series about a sprawling archipelago of island kingdoms, included five novels and many stories written between 1968 and 2001.

Other writers who discuss Ms. Le Guin’s work and influence in the film include Margaret Atwood (“The Handmaid’s Tale”), David Mitchell (“Cloud Atlas”) and Michael Chabon (“The Amazing Adventures of Kavalier & Clay”).

“I think she’s one of the greatest writers that the 20th-century American literary scene produced,” Mr. Chabon says.

. . . .

“I never wanted to be a writer—I just wrote,” she says in the film. Believing science fiction should be less about predicting the future than observing the present, she invented fantastical worlds that were their own kind of anthropology, exploring how societies work.

In her 1969 novel “The Left Hand of Darkness,” she introduces a genderless race of beings who are sexually active once a month, either as a man or woman—but don’t know which it will be. Her 1973 short story, “The Ones Who Walk Away From Omelas,” introduces a utopian city where everyone is happy. But readers learn that this blissful world is entirely dependent on one child being imprisoned in a basement and mistreated. The joy of all the people hinges on the child being forced to suffer, and everyone knows it. The author had been horrified to learn through her father’s research about the slaughter of native tribes that made modern California possible.

. . . .

As a female sci-fi writer, “my species was once believed to be mythological, like the tribble and the unicorn,” Ms. Le Guin said in an address before the 1975 Worldcon science-fiction convention in Melbourne, Australia. Her work was called feminist sci-fi, but she grew into that label awkwardly. “There was a considerable feeling that we needed to cut loose from marriage, from men, and from motherhood. And there was no way I was gonna do that,” she said. “Of course I can write novels with one hand and bring up three kids with the other. Yeah, sure. Watch me.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

 

‘Deepfakes’ Trigger a Race to Fight Manipulated Photos and Videos

28 July 2019

Perhaps a writing prompt.

From The Wall Street Journal:

Startup companies, government agencies and academics are racing to combat so-called deepfakes, amid fears that doctored videos and photographs will be used to sow discord ahead of next year’s U.S. presidential election.

It is a difficult problem to solve because the technology needed to manipulate images is advancing rapidly and getting easier to use, according to experts. And the threat is spreading, as smartphones have made cameras ubiquitous and social media has turned individuals into broadcasters, leaving companies that run those platforms unsure how to handle the issue.

“While synthetically generated videos are still easily detectable by most humans, that window is closing rapidly. I’d predict we see visually undetectable deepfakes in less than 12 months,” said Jeffrey McGregor, chief executive officer of Truepic, a San Diego-based startup that is developing image-verification technology. “Society is going to start distrusting every piece of content they see.”

Truepic is working with Qualcomm Inc.—the biggest supplier of chips for mobile phones—to add its technology to the hardware of cellphones. The technology would automatically mark photos and videos when they are taken with data such as time and location, so that they can be verified later. Truepic also offers a free app consumers can use to take verified pictures on their smartphones.

. . . .

When a photo or video is taken, Serelay can capture data such as where the camera was in relation to cellphone towers or GPS satellites. The company says it has partnerships with insurance companies that use the technology to help verify damage claims, though it declined to name the firms.

The U.S. Defense Department, meanwhile, is researching forensic technology that can be used to detect whether a photo or video was manipulated after it was made.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)
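The general idea behind capture-time verification can be sketched in a few lines. The sketch below is only a conceptual illustration, not Truepic’s or Qualcomm’s actual scheme; the key handling is deliberately simplified, and a real system would likely use hardware-backed asymmetric signatures rather than a shared secret. The point is simply that a fingerprint of the image bytes is bound to the time and location at capture, so later edits become detectable.

```python
# Conceptual sketch only: this is not Truepic's or Qualcomm's actual scheme.
# It illustrates binding capture metadata to image bytes at capture time so
# that later alterations can be detected.
import hashlib, hmac, json

DEVICE_KEY = b"device-secret-key"  # hypothetical key provisioned on the device

def sign_capture(image_bytes: bytes, timestamp: str, location: str) -> dict:
    """Record a fingerprint of the image plus capture metadata, then sign it."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": timestamp,
        "location": location,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Later, a holder of the key can check the image and metadata are unchanged."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(image_bytes).hexdigest() != claimed["image_sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

photo = b"...raw image bytes..."
rec = sign_capture(photo, "2019-07-28T10:15:00Z", "32.7157,-117.1611")
print(verify_capture(photo, rec))               # True: untouched
print(verify_capture(photo + b"edited", rec))   # False: image was altered
```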

Star Trek – Picard

21 July 2019

How Stanley Kubrick Staged the Moon Landing

19 July 2019

From The Paris Review:

Have you ever met a person who’s been on the moon? There are only four of them left. Within a decade or so, the last will be dead and that astonishing feat will pass from living memory into history, which, sooner or later, is always questioned and turned into fable. It will not be exactly like the moment the last conquistador died, but will lean in that direction. The story of the moon landing will become a little harder to believe.

I’ve met three of the twelve men who walked on the moon. They had one important thing in common when I looked into their eyes: they were all bonkers. Buzz Aldrin, who was the second off the ladder during the first landing on July 20, 1969, almost exactly fifty years ago—he must have stared with envy at Neil Armstrong’s crinkly space-suit ass all the way down—has run hot from the moment he returned to earth. When questioned about the reality of the landing—he was asked to swear to it on a Bible—he slugged the questioner. When I sat down with Edgar Mitchell, who made his landing in the winter of 1971, he had that same look in his eyes. I asked about the space program, but he talked only about UFOs. He said he’d been wrapped in a warm consciousness his entire time in space. Many astronauts came back with a belief in alien life.

Maybe it was simply the truth: maybe they had been touched by something. Or maybe the experience of going to the moon—standing and walking and driving that buggy and hitting that weightless golf ball—would make anyone crazy. It’s a radical shift in perspective, to see the earth from the outside, fragile and small, a rock in a sea of nothing. It wasn’t just the astronauts: everyone who saw the images and watched the broadcast got a little dizzy.

July 20, 1969, 3:17 P.M. E.S.T. The moment is an unacknowledged hinge in human history, unacknowledged because it seemed to lead nowhere. Where are the moon hotels and moon amusement parks and moon shuttles we grew up expecting? But it did lead to something: a new kind of mind. It’s not the birth of the space age we should be acknowledging on this fiftieth anniversary, but the birth of the paranoia that defines us. Because a man on the moon was too fantastic to accept, some people just didn’t accept it, or deal with its implications—that sea of darkness. Instead, they tried to prove it never happened, convince themselves it had all been faked. Having learned the habit of conspiracy spotting, these same people came to question everything else, too. History itself began to read like a fraud, a book filled with lies.

. . . .

The stories of a hoax predate the landing itself. As soon as the first capsules were in orbit, some began to dismiss the images as phony and the testimony of the astronauts as bullshit. The motivation seemed obvious: John F. Kennedy had promised to send a man to the moon within the decade. And, though we might be years behind the Soviets in rocketry, we were years ahead in filmmaking. If we couldn’t beat them to the moon, we could at least make it look like we had.

Most of the theories originated in the cortex of a single man: William Kaysing, who’d worked as a technical writer for Rocketdyne, a company that made engines. Kaysing left Rocketdyne in 1963, but remained fixated on the space program and its goal, which was often expressed as an item on a Cold War to-do list—go to the moon: check—but was in fact profound, powerful, surreal. A man on the moon would mean the dawn of a new era. Kaysing believed it unattainable, beyond the reach of existing technology. He cited his experience at Rocketdyne, but, one could say, he did not believe it simply because it was not believable. That’s the lens he brought to every NASA update. He was not watching for what had happened, but trying to figure out how it had been staged.

There were six successful manned missions to the moon, all part of Apollo. A dozen men walked the lunar surface between 1969 and 1972, when Harrison H. Schmitt—he later served as a Republican U.S. Senator from New Mexico—piloted the last lander off the surface. When people dismiss the project as a failure—we never went back because there is nothing for us there—others point out the fact that twenty-seven years passed between Columbus’s first Atlantic crossing and Cortez’s conquest of Mexico, or that 127 years passed between the first European visit to the Mississippi River and the second—it’d been “discovered,” “forgotten,” and “discovered” again. From some point in the future, our time, with its celebrities, politicians, its happiness and pain, might look like little more than an interregnum, the moment between the first landing and the colonization of space.

. . . .

Kaysing catalogued inconsistencies that “proved” the landing had been faked. There have been hundreds of movies, books, and articles that question the Apollo missions; almost all of them have relied on Kaysing’s “discoveries.”

  1. Old Glory: The American flag the astronauts planted on the moon, which should have been flaccid, the moon existing in a vacuum, is taut in photos, even waving, revealing more than NASA intended. (Knowing the flag would be flaccid, and believing a flaccid flag was no way to declare victory, engineers fitted the pole with a cross beam on which to hang the flag; if it looks like it’s waving, that’s because Buzz Aldrin was twisting the pole, screwing it into the lunar soil).
  2. There’s only one source of light on the moon—the sun—yet the shadows of the astronauts fall every which way, suggesting multiple light sources, just the sort you might find in a movie studio. (There were indeed multiple sources of light during the landings—it came from the sun, it came from the earth, it came from the lander, and it came from the astronauts’ space suits.)
  3. Blast Circle: If NASA had actually landed a craft on the moon, it would have left an impression and markings where the jets fired during takeoff. Yet, as can be seen in NASA’s own photos, there are none. You know what would’ve left no impression? A movie prop. Conspiracy theorists point out what looks like a C written on one of the moon rocks, as if it came straight from the special effects department. (The moon has about one-sixth the gravity of earth; the landing was therefore soft; the lander drifted down like a leaf. Nor was much propulsion needed to send the lander back into orbit. It left no impression just as you leave no impression when you touch the bottom of a pool; what looks like a C is probably a shadow.)
  4. Here you are, supposedly in outer space, yet we see no stars in the pictures. You know where else you wouldn’t see stars? A movie set. (The moon walks were made during the lunar morning—Columbus went ashore in daylight, too. You don’t see stars when the sun is out, nor at night in a light-filled place, like a stadium or a landing zone).
  5. Giant Leap for Mankind: If Neil Armstrong was the first man on the moon, then who was filming him go down the ladder? (A camera had been mounted to the side of the lunar module).

Kaysing’s alternate theory was elaborate. He believed the astronauts had been removed from the ship moments before takeoff, flown to Nevada, where, a few days later, they broadcast the moon walk from the desert. People claimed to have seen Armstrong walking through a hotel lobby, a show girl on each arm. Aldrin was playing the slots. They were then flown to Hawaii and put back inside the capsule after the splash down but before the cameras arrived.

. . . .

Of all the fables that have grown up around the moon landing, my favorite is the one about Stanley Kubrick, because it demonstrates the use of a good counternarrative. It seemingly came from nowhere, or gave birth to itself simply because it made sense. (Finding the source of such a story is like finding the source of a joke you’ve been hearing your entire life.) It started with a simple question: Who, in 1969, would have been capable of staging a believable moon landing?

Kubrick’s masterpiece, 2001: A Space Odyssey, had been released the year before. He’d plotted it with the science fiction master Arthur C. Clarke, who is probably more responsible for the look of our world, smooth as a screen, than any scientist. The manmade satellite, GPS, the smart phone, the space station: he predicted, they built. 2001 picked up an idea Clarke had explored in his earlier work, particularly his novel Childhood’s End—the fading of the human race, its transition from the swamp planet to the star-spangled depths of deep space. In 2001, change comes in the form of a monolith, a featureless black shard that an alien intelligence—you can call it God—parked on an antediluvian plain. Its presence remakes a tribe of apes, turning them into world-exploring, tool-building killers who will not stop until they find their creator, the monolith, buried on the dark side of the moon. But the plot is not what viewers, many of them stoned, took from 2001. It was the special effects that lingered, all that technology, which was no less than a vision, Ezekiel-like in its clarity, of the future. Orwell had seen the future as bleak and authoritarian; Huxley had seen it as a drug-induced dystopia. In the minds of Kubrick and Clarke, it shimmered, luminous, mechanical, and cold.

Most striking was the scene set on the moon, in which a group of astronauts, posthuman in their suits, descend into an excavation where, once again, the human race comes into contact with the monolith. Though shot in a studio, it looks more real than the actual landings.

Link to the rest at The Paris Review


The Debate over De-Identified Data: When Anonymity Isn’t Assured

11 July 2019

Not necessarily about writing or publishing, but an interesting 21st-century issue.

From Legal Tech News:

As more algorithm-coded technology comes to market, the debate over how individuals’ de-identified data is being used continues to grow.

A class action lawsuit filed in a Chicago federal court last month highlights the use of sensitive de-identified data for commercial means. Plaintiffs represented by law firm Edelson allege the University of Chicago Medical Center gave Google the electronic health records (EHR) of nearly all of its patients from 2009 to 2016, with which Google would create products. The EHR, which is a digital version of a patient’s paper chart, includes a patient’s height, weight, vital signs and medical procedure and illness history.

While the hospital asserted it did de-identify data, Edelson claims the hospital included date and time stamps and “copious” free-text medical notes that, combined with Google’s other massive troves of data, could easily identify patients, in noncompliance with the Health Insurance Portability and Accountability Act (HIPAA).

. . . .

“I think the biggest concern is the quantity of information Google has about individuals and its ability to reidentify information, and this gray area of if HIPAA permits it if it was fully de-identified,” said Fox Rothschild partner Elizabeth Litten.

Litten noted that transferring such data to Google, which has a host of information collected from other services, makes labeling data “de-identified” risky in that instance. “I would want to be very careful with who I share my de-identified data with, [or] share information with someone that doesn’t have access to a lot of information. Or [ensure] in the near future the data isn’t accessed by a bigger company and made identifiable in the future,” she explained.

If the data can be reidentified, it may also fall under the scope of the European Union’s General Data Protection Regulation (GDPR) or California’s upcoming data privacy law, noted Cogent Law Group associate Miles Vaughn.

Link to the rest at Legal Tech News

De-identified data is presently an important component in the development of artificial intelligence systems.

As PG understands it, a large mass of data concerning almost anything, but certainly including data about human behavior, is dumped into a powerful computer which is tasked with discerning patterns and relationships within the data.

The more data about individuals that goes into the AI hopper, the more the system can learn about groups of individuals, relationships between individuals, and behavior patterns of individuals that may not be generally known or discoverable through other, more traditional methods of data analysis.

As a crude example based on the brief description in the OP: an artificially intelligent system with access both to the medical records described in the OP and to the usage records of Ventra cards (the contactless fare cards electronically scanned on the Chicago Transit Authority) could conceivably tie an anonymous medical record to a specific individual by correlating Ventra card taps at a transit stop near the hospital with the time stamps on the digital medical record entries.
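A minimal sketch of that kind of correlation, using entirely invented data: a handful of visit time stamps from one “anonymous” record are matched against fare-card taps at a nearby stop, and the card that lines up with every visit becomes the obvious candidate. The field names, times, and matching window below are hypothetical; a real system would correlate far more signals.

```python
# Illustrative sketch only: all data, field names, and the matching rule are
# hypothetical, meant to show how timestamp correlation could narrow an
# "anonymous" record down to a single transit-card holder.
from collections import Counter
from datetime import datetime, timedelta

# De-identified clinic visit timestamps for one anonymous patient record.
visit_times = [
    datetime(2016, 3, 1, 9, 40),
    datetime(2016, 4, 12, 14, 5),
    datetime(2016, 6, 3, 10, 20),
]

# Transit-card taps at the stop nearest the clinic: (card_id, tap_time).
taps = [
    ("card-1017", datetime(2016, 3, 1, 9, 12)),
    ("card-2203", datetime(2016, 3, 1, 9, 15)),
    ("card-1017", datetime(2016, 4, 12, 13, 38)),
    ("card-1017", datetime(2016, 6, 3, 9, 55)),
    ("card-9550", datetime(2016, 6, 3, 18, 2)),
]

WINDOW = timedelta(minutes=45)  # a tap shortly before a recorded visit counts

# Count how many visits each card "explains" with a nearby tap time.
matches = Counter(
    card_id
    for visit in visit_times
    for card_id, tap_time in taps
    if timedelta(0) <= visit - tap_time <= WINDOW
)

# A card that lines up with every visit is a strong re-identification candidate.
for card_id, count in matches.most_common():
    print(card_id, f"matches {count} of {len(visit_times)} visits")
```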

Everyone Wants to Be the Next ‘Game of Thrones’

7 July 2019

From The Wall Street Journal:

Who will survive the Game of Clones?

The hunt is on for the next epic fantasy to fill the void left by the end of “Game of Thrones,” the HBO hit that averaged 45 million viewers per episode in its last season. In television, film and books, series that build elaborate worlds the same way the medieval-supernatural saga did are in high demand.

“There’s a little bit of a gold-rush mentality coming off the success of ‘Game of Thrones,’” says Marc Guggenheim, an executive producer of “Carnival Row,” a series with mythological creatures that arrives on Amazon Prime Video in August. “Everyone wants to tap into that audience.”

There’s no guarantee anyone will be able to replicate the success of “Thrones.” Entertainment is littered with copycats of other hits that fell flat. But the market is potentially large and lucrative. So studios are pouring millions into new shows, agents are brokering screen deals around book series that can’t get written fast enough and experts are readying movie-level visual effects for epic storytelling aimed at the couch.

. . . .

Literary agent Joanna Volpe represents three fantasy authors whose books now are being adapted for the screen. “‘Game of Thrones’ opened a door—it made studios hungrier for material like this,” she says. A decade ago, she adds, publishing and TV weren’t interested in fantasy for adults because only the rare breakout hit reached beyond the high-nerd niche.

. . . .

HBO doesn’t release demographic data on viewers, though cultural gatekeepers say they barely need it. “You know what type of audience you’re getting: It’s premium TV, it’s educated, it’s an audience you want to tap into,” says Kaitlin Harri, senior marketing director at publisher William Morrow. By the end of the series, the audience had broadened to include buzz seekers of all kinds with little interest in fantasy.

The show based on the books by George R.R. Martin ended its eight-year run in May, but it remains in the muscle memory of many die-hard fans. “I still look forward to Sunday nights thinking that at 9 o’clock I’m going to get a new episode,” says Samantha Ecker, a 35-year-old writer for “Watchers on the Wall,” which is still an active fan site. The memorabilia collector continues to covet all things “Throne.” Last week, she got a $15 figurine of Daenerys Targaryen sitting on the Iron Throne “since they didn’t let her do it in the show.”

. . . .

“Game of Thrones” has helped ring in a new era in fantasy writing, with heightened interest in powerful female characters. Authors generating excitement include R.F. Kuang, who soon releases “The Dragon Republic,” part of a fantasy series infused with Chinese history, and S.A. Chakraborty, whose Islamic-influenced series includes “The Kingdom of Copper,” out earlier this year.

For its fantasies featuring power struggles that might appeal to “Thrones” fans, Harper Voyager uses marketing trigger words like “politics,” “palace intrigue” and “succession,” says David Pomerico, editorial director of the imprint of HarperCollins, which like The Wall Street Journal is owned by News Corp.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Are Colleges Friendly to Fantasy Writers? It’s Complicated

29 June 2019

From Wired:

In an increasingly competitive publishing environment, more and more fantasy and science fiction writers are going back to school to get an MFA in creative writing. College writing classes have traditionally been hostile to fantasy and sci-fi, but author Chandler Klang Smith says that’s no longer the case.

“I definitely don’t think the landscape out there is hostile toward speculative writing,” Smith says in Episode 365 of the Geek’s Guide to the Galaxy podcast. “If anything I think it’s seen as being kind of exciting and sexy and new, which is something these programs want.”

But science fiction author John Kessel, who helped found the creative writing MFA program at North Carolina State University, says it really depends on the type of speculative fiction. Slipstream and magical realism may have acquired a certain cachet, but epic fantasy and space opera definitely haven’t.

“The more it seems like traditional science fiction, the less comfortable programs will be with it,” he says. “Basically if the story is set in the present and has some really odd thing in it, then I think you won’t raise as many eyebrows. But I think that traditional science fiction—anything that resembles Star Wars or Star Trek, or even Philip K. Dick—I think some places would look a little sideways at it.”

That uncertainty can put aspiring fantasy and science fiction writers in a tough spot, as writer Steph Grossman discovered when she was applying to MFA programs. “As an applicant—and even though I did a ton of research—it’s really hard to find which schools are going to be accepting of it and which schools aren’t,” she says. “The majority of them will be accepting of some aspect of it—especially if you’re writing things in the slipstream genre—but besides Sarah Lawrence College and Stonecoast, when I was looking, most of the schools don’t really touch on whether they’re accepting of it or not.”

Geek’s Guide to the Galaxy host David Barr Kirtley warns that writing fantasy and science fiction requires specialized skills and knowledge that most MFA programs simply aren’t equipped to teach.

“I would say that if you’re writing epic fantasy or sword and sorcery or space opera and things like that, I think you’d probably be much happier going to Clarion or Odyssey, these six week summer workshops where you’re going to be surrounded by more hardcore science fiction and fantasy fans,” he says. “And definitely do your research. Don’t just apply to your local MFA program and expect that you’re going to get helpful feedback on work like that.”

Link to the rest at Wired
