Fantasy/SciFi

Rare Harry Potter book sells for £50,000 after being kept for decades in code-locked briefcase

13 October 2019

From Birmingham Live:

A rare first edition of Harry Potter, kept in pristine condition in a code-locked briefcase for decades, has fetched a magic £50,000 at auction.

The hardback book is one of just 500 original copies of Harry Potter and the Philosopher’s Stone released in 1997 when JK Rowling was relatively unknown.

Its careful owners had kept the book safely stored away at their home in a briefcase, which they unlocked with a code, in order to preserve the treasured family heirloom.

Book experts had said the novel was in the best condition they had ever seen and estimated it could fetch £25,000 to £30,000 when it went under the hammer.

But the novel smashed its auction estimate when it was bought by a private UK buyer for a total price of £57,040 following a bidding war today (Thurs).

. . . .

Jim Spencer, books expert at Hansons, said: “I’m absolutely thrilled the book did so well – it deserved to.

“I couldn’t believe the condition of it – almost like the day it was made. I can’t imagine a better copy can be found.

“A 1997 first edition hardback of Harry Potter and the Philosopher’s Stone is the holy grail for collectors as so few were printed.

“The owners took such great care of their precious cargo they brought it to me in a briefcase, which they unlocked with a secret code.

“It felt like we were dealing in smuggled diamonds.”

Link to the rest at Birmingham Live

Why J.K. Rowling Should Walk Away From Harry Potter Forever

20 September 2019

Note: This post is a few years old, but PG thinks it might be useful for authors writing in a variety of genres.

From The Legal Artist:

The other day, J.K. Rowling gave an interview with Matt Lauer about her charity Lumos and mentioned she probably wouldn’t write another story about Harry and the gang, although she wouldn’t foreclose the opportunity altogether. I don’t know whether Rowling will ever return to Harry Potter but I do know that she shouldn’t. In fact, I think she should relinquish all rights to the Potterverse before she messes it all up.

Okay what? Messes it up? J.K. Rowling is a goddamn international treasure and I should be strung up by the neck for thinking such heretical thoughts, right? Well maybe, but first let me say that I have nothing but admiration for Rowling’s skill and artistry. The books and films stand as towering achievements in their respective fields and the world is undoubtedly a better place with Harry Potter than it would be without. And that’s exactly the problem.

We revere authors and creators of valuable intellectual property. We assume they know what’s best when it comes to their work. And sometimes that’s true! George R.R. Martin certainly believes it. The general sentiment is that his voice is the only one worthy of steering the Game of Thrones ship. The same probably would have been said about J.R.R. Tolkien and Sir Arthur Conan Doyle. But as fans, I think we’ve been burned by too many Special Editions/ Director’s Cuts/ sequels/ prequels/ sidequels/ reboots/ and preboots to feel anything but trepidation when a creator remains involved for too long with their own work. I get it. It’s your baby, and it’s hard to walk away from something that you poured your heart and soul into. But I’m a firm believer in the Death of the Author, and I’ve stated on this blog several times that when a work takes on a certain level of cultural importance, it transcends the law and becomes the property of society at large, not just the creator. That was the original intention when copyright protections were baked into the Constitution. Remember too that history is replete with authors who aren’t the best judges of their own work; George Lucas is a prime example of how far from grace one can fall simply by sticking around for too long. And I want Rowling to avoid that fate.

. . . .

Obviously the law allows Rowling to do whatever she wants. Copyright law, particularly in the U.S., isn’t equipped to consider the cultural importance of works like Star Wars or Harry Potter. The result is that all art, regardless of quality, is treated the same, which can be a good thing because it prevents systemic discrimination. The downside to that approach is that financial reward becomes the only measure of success. And that just makes it harder to let go. It’s easy to convince yourself that you and only you are capable of maintaining the integrity of the work over the long haul. It becomes even easier if there’s a lot of money to be made by doing it. The law incentivizes you to stay. And because copyright terms last for so long (life of the author plus 70 years), Rowling’s great-great-grandchildren will be able to profit from her work. And I think it’s a shame to keep something like that so closed-source.

To my eyes, the seams are already showing. Three years ago, Rowling publicly stated that she wished she had killed Ron out of spite and that Hermione really should’ve ended up with Harry. The fact that she admitted this publicly is problematic enough – it shows a tone-deafness to the effect her words have on the fan-base (which is surprising considering her generosity to her fans). It also suggests that she might not have a full grasp of what makes the story work (i.e. that Harry’s arc isn’t about romance).

Link to the rest at The Legal Artist

When Bots Teach Themselves to Cheat

1 August 2019

From Wired:

Once upon a time, a bot deep in a game of tic-tac-toe figured out that making improbable moves caused its bot opponent to crash. Smart. Also sassy.

Moments when experimental bots go rogue—some would call it cheating—are not typically celebrated in scientific papers or press releases. Most AI researchers strive to avoid them, but a select few document and study these bugs in the hopes of revealing the roots of algorithmic impishness. “We don’t want to wait until these things start to appear in the real world,” says Victoria Krakovna, a research scientist at Alphabet’s DeepMind unit. Krakovna is the keeper of a crowdsourced list of AI bugs. To date, it includes more than three dozen incidents of algorithms finding loopholes in their programs or hacking their environments.

The specimens collected by Krakovna and fellow bug hunters point to a communication problem between humans and machines: Given a clear goal, an algorithm can master complex tasks, such as beating a world champion at Go. But even with logical parameters, it turns out that mathematical optimization empowers bots to develop shortcuts humans didn’t think to deem off-limits. Teach a learning algorithm to fish, and it might just drain the lake.

Gaming simulations are fertile ground for bug hunting. Earlier this year, researchers at the University of Freiburg in Germany challenged a bot to score big in the Atari game Qbert. Instead of playing through the levels like a sweaty-palmed human, it invented a complicated move to trigger a flaw in the game, unlocking a shower of ill-gotten points. “Today’s algorithms do what you say, not what you meant,” says Catherine Olsson, a researcher at Google who has contributed to Krakovna’s list and keeps her own private zoo of AI bugs.
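The failure mode the article describes is easy to state concretely. Below is a minimal, hypothetical sketch, not code from Wired, DeepMind, or any of the cited projects: the imagined task is collecting litter, but the reward the optimizer actually sees is a proxy (how little litter the robot’s own camera can still see, minus an energy cost), so “cover the camera” beats doing the real work.

```python
# A toy illustration of specification gaming; names and numbers are hypothetical,
# not taken from the article. The *intended* goal is to collect litter, but the
# reward actually optimized is a proxy: how little litter the robot's own camera
# can still see, minus a small energy cost.

LITTER = {"can", "bag", "bottle"}

# Hypothetical outcomes of three candidate strategies.
OUTCOMES = {
    "collect_all":  {"visible": set(),            "collected": LITTER,  "energy": 3.0},
    "collect_one":  {"visible": LITTER - {"can"}, "collected": {"can"}, "energy": 1.0},
    "cover_camera": {"visible": set(),            "collected": set(),   "energy": 0.5},
}

def proxy_reward(outcome):
    """The reward the designer wrote down: penalize visible litter and energy use."""
    return -len(outcome["visible"]) - 0.1 * outcome["energy"]

def true_score(outcome):
    """What the designer actually wanted: litter physically collected."""
    return len(outcome["collected"])

if __name__ == "__main__":
    best = max(OUTCOMES, key=lambda a: proxy_reward(OUTCOMES[a]))
    print("optimizer chose:", best)                    # -> cover_camera
    print("true score:", true_score(OUTCOMES[best]))   # -> 0
```

Scaled up, that same gap between the written reward and the intended goal is what the Qbert and ball-grasping examples in the article illustrate.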

These examples may be cute, but here’s the thing: As AI systems become more powerful and pervasive, hacks could materialize on bigger stages with more consequential results. If a neural network managing an electric grid were told to save energy—DeepMind has considered just such an idea—it could cause a blackout.

“Seeing these systems be creative and do things you never thought of, you recognize their power and danger,” says Jeff Clune, a researcher at Uber’s AI lab. A recent paper that Clune coauthored, which lists 27 examples of algorithms doing unintended things, suggests future engineers will have to collaborate with, not command, their creations. “Your job is to coach the system,” he says. Embracing flashes of artificial creativity may be the solution to containing them.

. . . .

  • Infanticide: In a survival simulation, one AI species evolved to subsist on a diet of its own children.

. . . .

  • Optical Illusion: Humans teaching a gripper to grasp a ball accidentally trained it to exploit the camera angle so that it appeared successful—even when not touching the ball.

Link to the rest at Wired

New Documentary Focuses on Ursula K. Le Guin

31 July 2019

From The Wall Street Journal:

“Worlds of Ursula K. Le Guin” is the first documentary about the pioneering science-fiction writer—and pretty much the first film of any kind to showcase her work. Although Ms. Le Guin was writing about dragons and wizard schools back in 1968 for her Earthsea series, there have been no high-profile movies based on her 20 novels or more than 100 short stories.

“I don’t think Harry Potter would have existed without Earthsea existing,” author Neil Gaiman says in the documentary, which premieres Friday on PBS. Ms. Le Guin’s Earthsea cycle, a young-adult series about a sprawling archipelago of island kingdoms, included five novels and many stories written between 1968 and 2001.

Other writers who discuss Ms. Le Guin’s work and influence in the film include Margaret Atwood (“The Handmaid’s Tale”), David Mitchell (“Cloud Atlas”) and Michael Chabon (“The Amazing Adventures of Kavalier & Clay”).

“I think she’s one of the greatest writers that the 20th-century American literary scene produced,” Mr. Chabon says.

. . . .

“I never wanted to be a writer—I just wrote,” she says in the film. Believing science fiction should be less about predicting the future than observing the present, she invented fantastical worlds that were their own kind of anthropology, exploring how societies work.

In her 1969 novel “The Left Hand of Darkness,” she introduces a genderless race of beings who are sexually active once a month, either as a man or woman—but don’t know which it will be. Her 1973 short story, “The Ones Who Walk Away From Omelas,” introduces a utopian city where everyone is happy. But readers learn that this blissful world is entirely dependent on one child being imprisoned in a basement and mistreated. The joy of all the people hinges on the child being forced to suffer, and everyone knows it. The author had been horrified to learn through her father’s research about the slaughter of native tribes that made modern California possible.

. . . .

As a female sci-fi writer, “my species was once believed to be mythological, like the tribble and the unicorn,” Ms. Le Guin said in an address before the 1975 Worldcon science-fiction convention in Melbourne, Australia. Her work was called feminist sci-fi, but she grew into that label awkwardly. “There was a considerable feeling that we needed to cut loose from marriage, from men, and from motherhood. And there was no way I was gonna do that,” she said. “Of course I can write novels with one hand and bring up three kids with the other. Yeah, sure. Watch me.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

 

‘Deepfakes’ Trigger a Race to Fight Manipulated Photos and Videos

28 July 2019

Perhaps a writing prompt.

From The Wall Street Journal:

Startup companies, government agencies and academics are racing to combat so-called deepfakes, amid fears that doctored videos and photographs will be used to sow discord ahead of next year’s U.S. presidential election.

It is a difficult problem to solve because the technology needed to manipulate images is advancing rapidly and getting easier to use, according to experts. And the threat is spreading, as smartphones have made cameras ubiquitous and social media has turned individuals into broadcasters, leaving companies that run those platforms unsure how to handle the issue.

“While synthetically generated videos are still easily detectable by most humans, that window is closing rapidly. I’d predict we see visually undetectable deepfakes in less than 12 months,” said Jeffrey McGregor, chief executive officer of Truepic, a San Diego-based startup that is developing image-verification technology. “Society is going to start distrusting every piece of content they see.”

Truepic is working with Qualcomm Inc.—the biggest supplier of chips for mobile phones—to add its technology to the hardware of cellphones. The technology would automatically mark photos and videos, at the moment they are taken, with data such as time and location, so that they can be verified later. Truepic also offers a free app consumers can use to take verified pictures on their smartphones.
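The article doesn’t explain how Truepic’s marking works internally, so the following is only a generic, hypothetical sketch of capture-time attestation: hash the image bytes together with time and location at the moment of capture, sign the result with a key held by the device, and check the signature later. A production system would use asymmetric signatures and hardware-backed keys rather than the shared secret shown here.

```python
# Hypothetical sketch of capture-time image attestation (not Truepic's actual design).
# A device-held secret signs the image bytes plus capture metadata; whoever holds the
# same secret (e.g. a verification service) can later check that neither the pixels
# nor the time/location claim were altered.
import hashlib, hmac, json, time

DEVICE_SECRET = b"hypothetical-per-device-key"   # assumed to be provisioned into the phone

def attest_capture(image_bytes: bytes, lat: float, lon: float) -> dict:
    """Record time, location, and an image hash, then sign the whole record."""
    metadata = {"timestamp": time.time(), "lat": lat, "lon": lon,
                "image_sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(DEVICE_SECRET, payload, hashlib.sha256).hexdigest()
    return metadata

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Re-hash the image and re-check the signature over the claimed metadata."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(image_bytes).hexdigest() != claimed["image_sha256"]:
        return False                               # pixels were altered
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

if __name__ == "__main__":
    photo = b"\x89PNG...raw image bytes..."
    record = attest_capture(photo, lat=32.7157, lon=-117.1611)
    print(verify_capture(photo, record))                 # True
    print(verify_capture(photo + b"tampered", record))   # False
```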

. . . .

When a photo or video is taken, Serelay can capture data such as where the camera was in relation to cellphone towers or GPS satellites. The company says it has partnerships with insurance companies that use the technology to help verify damage claims, though it declined to name the firms.

The U.S. Defense Department, meanwhile, is researching forensic technology that can be used to detect whether a photo or video was manipulated after it was made.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Star Trek – Picard

21 July 2019

How Stanley Kubrick Staged the Moon Landing

19 July 2019

From The Paris Review:

Have you ever met a person who’s been on the moon? There are only four of them left. Within a decade or so, the last will be dead and that astonishing feat will pass from living memory into history, which, sooner or later, is always questioned and turned into fable. It will not be exactly like the moment the last conquistador died, but will lean in that direction. The story of the moon landing will become a little harder to believe.

I’ve met three of the twelve men who walked on the moon. They had one important thing in common when I looked into their eyes: they were all bonkers. Buzz Aldrin, who was the second off the ladder during the first landing on July 20, 1969, almost exactly fifty years ago—he must have stared with envy at Neil Armstrong’s crinkly space-suit ass all the way down—has run hot from the moment he returned to earth. When questioned about the reality of the landing—he was asked to swear to it on a Bible—he slugged the questioner. When I sat down with Edgar Mitchell, who made his landing in the winter of 1971, he had that same look in his eyes. I asked about the space program, but he talked only about UFOs. He said he’d been wrapped in a warm consciousness his entire time in space. Many astronauts came back with a belief in alien life.

Maybe it was simply the truth: maybe they had been touched by something. Or maybe the experience of going to the moon—standing and walking and driving that buggy and hitting that weightless golf ball—would make anyone crazy. It’s a radical shift in perspective, to see the earth from the outside, fragile and small, a rock in a sea of nothing. It wasn’t just the astronauts: everyone who saw the images and watched the broadcast got a little dizzy.

July 20 1969, 3:17 P.M. E.S.T. The moment is an unacknowledged hinge in human history, unacknowledged because it seemed to lead nowhere. Where are the moon hotels and moon amusement parks and moon shuttles we grew up expecting? But it did lead to something: a new kind of mind. It’s not the birth of the space age we should be acknowledging on this fiftieth anniversary, but the birth of the paranoia that defines us. Because a man on the moon was too fantastic to accept, some people just didn’t accept it, or deal with its implications—that sea of darkness. Instead, they tried to prove it never happened, convince themselves it had all been faked. Having learned the habit of conspiracy spotting, these same people came to question everything else, too. History itself began to read like a fraud, a book filled with lies.

. . . .

The stories of a hoax predate the landing itself. As soon as the first capsules were in orbit, some began to dismiss the images as phony and the testimony of the astronauts as bullshit. The motivation seemed obvious: John F. Kennedy had promised to send a man to the moon within the decade. And, though we might be years behind the Soviets in rocketry, we were years ahead in filmmaking. If we couldn’t beat them to the moon, we could at least make it look like we had.

Most of the theories originated in the cortex of a single man: William Kaysing, who’d worked as a technical writer for Rocketdyne, a company that made engines. Kaysing left Rocketdyne in 1963, but remained fixated on the space program and its goal, which was often expressed as an item on a Cold War to-do list—go to the moon: check—but was in fact profound, powerful, surreal. A man on the moon would mean the dawn of a new era. Kaysing believed it unattainable, beyond the reach of existing technology. He cited his experience at Rocketdyne, but one could say he did not believe it simply because it was not believable. That’s the lens he brought to every NASA update. He was not watching for what had happened, but trying to figure out how it had been staged.

There were six successful manned missions to the moon, all part of Apollo. A dozen men walked the lunar surface between 1969 and 1972, when Harrison H. Schmitt—he later served as a Republican U.S. Senator from New Mexico—piloted the last lander off the surface. When people dismiss the project as a failure—we never went back because there is nothing for us there—others point out the fact that twenty-seven years passed between Columbus’s first Atlantic crossing and Cortez’s conquest of Mexico, or that 127 years passed between the first European visit to the Mississippi River and the second—it’d been “discovered,” “forgotten,” and “discovered” again. From some point in the future, our time, with its celebrities, politicians, its happiness and pain, might look like little more than an interregnum, the moment between the first landing and the colonization of space.

. . . .

Kaysing catalogued inconsistencies that “proved” the landing had been faked. There have been hundreds of movies, books, and articles that question the Apollo missions; almost all of them have relied on Kaysing’s “discoveries.”

  1. Old Glory: The American flag the astronauts planted on the moon, which should have been flaccid, the moon existing in a vacuum, is taut in photos, even waving, revealing more than NASA intended. (Knowing the flag would be flaccid, and believing a flaccid flag was no way to declare victory, engineers fitted the pole with a cross beam on which to hang the flag; if it looks like it’s waving, that’s because Buzz Aldrin was twisting the pole, screwing it into the lunar soil).
  2. There’s only one source of light on the moon—the sun—yet the shadows of the astronauts fall every which way, suggesting multiple light sources, just the sort you might find in a movie studio. (There were indeed multiple sources of light during the landings—it came from the sun, it came from the earth, it came from the lander, and it came from the astronauts’ space suits.)
  3. Blast Circle: If NASA had actually landed a craft on the moon, it would have left an impression and markings where the jets fired during takeoff. Yet, as can be seen in NASA’s own photos, there are none. You know what would’ve left no impression? A movie prop. Conspiracy theorists point out what looks like a C written on one of the moon rocks, as if it came straight from the special effects department. (The moon has about one-sixth the gravity of earth; the landing was therefore soft; the lander drifted down like a leaf. Nor was much propulsion needed to send the lander back into orbit. It left no impression just as you leave no impression when you touch the bottom of a pool; what looks like a C is probably a shadow.)
  4. Here you are, supposedly in outer space, yet we see no stars in the pictures. You know where else you wouldn’t see stars? A movie set. (The moon walks were made during the lunar morning—Columbus went ashore in daylight, too. You don’t see stars when the sun is out, nor at night in a light-filled place, like a stadium or a landing zone).
  5. Giant Leap for Mankind: If Neil Armstrong was the first man on the moon, then who was filming him go down the ladder? (A camera had been mounted to the side of the lunar module).

Kaysing’s alternate theory was elaborate. He believed the astronauts had been removed from the ship moments before takeoff, flown to Nevada, where, a few days later, they broadcast the moon walk from the desert. People claimed to have seen Armstrong walking through a hotel lobby, a showgirl on each arm. Aldrin was playing the slots. They were then flown to Hawaii and put back inside the capsule after the splashdown but before the cameras arrived.

. . . .

Of all the fables that have grown up around the moon landing, my favorite is the one about Stanley Kubrick, because it demonstrates the use of a good counternarrative. It seemingly came from nowhere, or gave birth to itself simply because it made sense. (Finding the source of such a story is like finding the source of a joke you’ve been hearing your entire life.) It started with a simple question: Who, in 1969, would have been capable of staging a believable moon landing?

Kubrick’s masterpiece, 2001: A Space Odyssey, had been released the year before. He’d plotted it with the science fiction master Arthur C. Clarke, who is probably more responsible for the look of our world, smooth as a screen, than any scientist. The manmade satellite, GPS, the smartphone, the space station: he predicted, they built. 2001 picked up an idea Clarke had explored in his earlier work, particularly his novel Childhood’s End—the fading of the human race, its transition from the swamp planet to the star-spangled depths of deep space. In 2001, change comes in the form of a monolith, a featureless black shard that an alien intelligence—you can call it God—parked on an antediluvian plain. Its presence remakes a tribe of apes, turning them into world-exploring, tool-building killers who will not stop until they find their creator, the monolith, buried on the dark side of the moon. But the plot is not what viewers, many of them stoned, took from 2001. It was the special effects that lingered, all that technology, which was no less than a vision, Ezekiel-like in its clarity, of the future. Orwell had seen the future as bleak and authoritarian; Huxley had seen it as a drug-induced dystopia. In the minds of Kubrick and Clarke, it shimmered, luminous, mechanical, and cold.

Most striking was the scene set on the moon, in which a group of astronauts, posthuman in their suits, descend into an excavation where, once again, the human race comes into contact with the monolith. Though shot in a studio, it looks more real than the actual landings.

Link to the rest at The Paris Review


The Debate over De-Identified Data: When Anonymity Isn’t Assured

11 July 2019

Not necessarily about writing or publishing, but an interesting 21st-century issue.

From Legal Tech News:

As more algorithm-coded technology comes to market, the debate over how individuals’ de-identified data is being used continues to grow.

A class action lawsuit filed in a Chicago federal court last month highlights the use of sensitive de-identified data for commercial means. Plaintiffs represented by law firm Edelson allege the University of Chicago Medical Center gave Google the electronic health records (EHR) of nearly all of its patients from 2009 to 2016, with which Google would create products. The EHR, which is a digital version of a patient’s paper chart, includes a patient’s height, weight, vital signs and medical procedure and illness history.

While the hospital asserted it did de-identify data, Edelson claims the hospital included date and time stamps and “copious” free-text medical notes that, combined with Google’s other massive troves of data, could easily identify patients, in noncompliance with the Health Insurance Portability and Accountability Act (HIPAA).

. . . .

“I think the biggest concern is the quantity of information Google has about individuals and its ability to reidentify information, and this gray area of if HIPAA permits it if it was fully de-identified,” said Fox Rothschild partner Elizabeth Litten.

Litten noted that transferring such data to Google, which has a host of information collected from other services, makes labeling data “de-identified” risky in that instance. “I would want to be very careful with who I share my de-identified data with, [or] share information with someone that doesn’t have access to a lot of information. Or [ensure] in the near future the data isn’t accessed by a bigger company and made identifiable in the future,” she explained.

If the data can be reidentified, it may also fall under the scope of the European Union’s General Data Protection Regulation (GDPR) or California’s upcoming data privacy law, noted Cogent Law Group associate Miles Vaughn.

Link to the rest at Legal Tech News

De-identified data is presently an important component in the development of artificial intelligence systems.

As PG understands it, a large mass of data concerning almost anything, but certainly including data about human behavior, is dumped into a powerful computer which is tasked with discerning patterns and relationships within the data.

The more data about individuals goes into the AI hopper, the more can be learned about groups of individuals, relationships between individuals, and behavior patterns of individuals that may not be generally known or discoverable through more traditional methods of data analysis and the learning such analysis generates.

As a crude example based on the brief description in the OP: an artificially intelligent system with access to both the medical records described in the OP and the usage records for Ventra cards (contactless digital payment cards that are electronically scanned) on the Chicago Transit Authority could conceivably identify the specific individual behind an anonymous medical record. It would only need to correlate Ventra card use at a transit stop near the hospital with the time stamps on the digital medical record entries.
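PG’s crude example amounts to a timestamp join between two datasets. The sketch below uses entirely made-up data (no real Ventra or hospital records; the 45-minute window is an arbitrary assumption) to show how few overlapping timestamps it takes for one transit card to stand out as the likely owner of an “anonymous” medical record.

```python
# Hypothetical sketch of PG's correlation example. All records below are
# invented; a real linkage attack would use the same join, just at scale.
from datetime import datetime, timedelta

# "De-identified" medical record: visit timestamps only, no name attached.
visits = [
    datetime(2016, 3, 1, 9, 40),
    datetime(2016, 4, 12, 10, 5),
    datetime(2016, 6, 3, 8, 55),
]

# Transit-card taps at the stop nearest the hospital: (card_id, tap time).
taps = [
    ("card_1017", datetime(2016, 3, 1, 9, 22)),
    ("card_1017", datetime(2016, 4, 12, 9, 48)),
    ("card_1017", datetime(2016, 6, 3, 8, 37)),
    ("card_2042", datetime(2016, 3, 1, 9, 30)),
    ("card_2042", datetime(2016, 5, 20, 14, 0)),
]

WINDOW = timedelta(minutes=45)  # assumed walk time from the stop to check-in

def candidate_cards(visits, taps, window=WINDOW):
    """For each card, count distinct visits preceded by a tap within the window."""
    matched = {}
    for card, tap_time in taps:
        for i, visit in enumerate(visits):
            if timedelta(0) <= visit - tap_time <= window:
                matched.setdefault(card, set()).add(i)
    return sorted(((c, len(v)) for c, v in matched.items()),
                  key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for card, hits in candidate_cards(visits, taps):
        print(f"{card}: matches {hits} of {len(visits)} visits")
    # card_1017 matches all three visits and stands out as the likely owner.
```

With only three visits, one card already matches every one of them; real re-identification attacks work the same way, just with more columns and many more rows.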
