The War On What’s Real


Not exactly to do with writing, but, potentially, a basketful of writing prompts.

From Fast Company:

Nine years ago, I sat at ground zero of the face-swapping revolution. I was inside the screening room of Ed Ulbrich, who was hot off building the technology that had transformed Brad Pitt’s visage in The Curious Case of Benjamin Button. Ulbrich had welcomed me to his VFX studio Digital Domain to preview something that could top even the technical magic of Benjamin Button. He dimmed the lights, and the opening notes of his latest opus, Tron: Legacy, began to play. Soon, I was face to face with a digitally reconstructed Jeff Bridges, who wasn’t 60 years old anymore, but a spry 30. To Ulbrich, face-swapping was the Holy Grail of special effects–and we were witnessing its realization.

. . . .

“It was really hard, it was really slow, it was really tedious, it was really expensive,” Ulbrich said at the time. “And the next time we do it, it’s going to be less difficult, and less slow and less expensive.” Digital Domain eventually pivoted to resurrecting Tupac as a hologram, and later declared bankruptcy. But the company left its mark: digital face-swapping has become a mainstay tool of Hollywood, putting Arnold Schwarzenegger in Terminator Genisys and Carrie Fisher in Star Wars: The Force Awakens.

The jaw-dropping effect is still fairly difficult and expensive, though–or it was until an anonymous Redditor named Deepfakes changed all that overnight and brought Ulbrich’s words back to me with perfect clarity. In early 2018, Deepfakes released a unique bit of code to the public that allows anyone to easily and convincingly map a face onto someone else’s head in full motion video. Then, another Redditor quickly created FakeApp, which gave the Deepfakes scripts a user-friendly front end.

. . . .

Finally, 2016 brought the meteoric rise of automated machine learning–systems that could improve upon themselves–which Deepfakes refers to as “steroids” on top of all the science that came before.

. . . .

Deepfakes’ tech is based on a relatively typical Generative Adversarial Network (GAN)–the sort used by basically every AI researcher in the world. Don’t think of a GAN as one AI but two, each vying to be the teacher’s pet. One AI, called the generator, draws faces while the other AI, called the discriminator, critiques them by trying to spot fakes. Repeating this process millions of times, both AIs continue to improve and make one another better in a battle that can be waged as long as your computer is plugged in. Deepfakes’ results are pretty convincing in about half a day of computing on a low-end PC.
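To make the generator-versus-discriminator tug of war concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is not Deepfakes’ actual code: the tiny networks, the synthetic two-dimensional “real” data standing in for face images, and every name in it are illustrative assumptions. It only shows the two-player structure the article describes, with the discriminator learning to flag fakes and the generator learning to fool it.

```python
# Minimal GAN training loop (illustrative sketch only, not Deepfakes' code).
# "Real" data here is a synthetic 2-D Gaussian standing in for face images.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to fake samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)

# Discriminator: scores samples, trying to tell real from fake.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in "real" data
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator: label real samples 1 and fakes 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator: push the discriminator to label fakes as 1.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each pass through the loop is one round of the “battle”: as the discriminator gets better at spotting fakes, the generator’s gradient signal pushes it toward output the discriminator can no longer distinguish from the real data.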

Similar neural networks have had many wonderful, humanity-improving impacts. They can spot skin cancer early, or discover new drug treatments for the diseases of today. The sudden improvement in voice recognition in Amazon Alexa and Google Home can be attributed in large part to GANs, too.

But fundamentally, there’s a problem that happens when a machine trains another machine to create a piece of media that’s indistinguishable from the real thing. It creates content that, by design, can pass the tests of the highest possible professional scrutiny.

. . . .

“The more techniques you develop to distinguish fact from fiction, the better the fiction becomes,” explains Jon Brandt, Director of the Media Intelligence Lab at Adobe Research, and the first AI researcher the company hired 15 years ago.

At Adobe, Brandt’s job isn’t to create future products but to spearhead and coordinate the bleeding-edge IP with which they’ll be infused. For more than a decade, Adobe has owed many of its improvements to AI. In 2004, it introduced its first AI feature with automatic red-eye removal. Then face tagging in 2005. Now AI powers dozens of features from cropping and image searching to lip-syncing.

For the most part, these are handy updates to Adobe’s creative tools, gradually improving the features of its apps. But knowing where my questioning was leading him–that Adobe is the biggest, most profitable company in image manipulation, and single-handedly ushered in the reality distortion field of Photoshop that we all live in today–Brandt doesn’t mince words.

“The elephant in the room is Sensei,” he says. “It’s Adobe’s name around how we are approaching AI and machine learning for our business. It is not a product–which is frequently a source of confusion–but a set of technologies.”

. . . .

Thanks to machine learning–and a small army of research interns Adobe recruits every summer to publish work at the company–Adobe can create a doppelgänger of your own voice, to make you say things you never said. It can automatically stitch together imaginary landscapes. It can make one photo look stylistically identical to another photo. And it can remove huge objects from videos as easily as you might drag and drop a file on your desktop.

These research experiments–which, to be clear, haven’t been rolled out into Adobe products–aren’t just powerful media manipulators. They’re designed to be accessible, too. They don’t require years of expertise, honing your craft with artisanal tools. With the infusion of AI assisting the user, the apprentice becomes an instant master. Soon, Adobe plans to make image manipulation as simple as talking to your phone.

“It’s part of our mission to make our tools as easy, and enable our creatives to express themselves, as readily as possible,” says Brandt. “That’s developed a need for us to have more and more understanding of the context and content of what they’re working on, which requires a machine learning approach.”

. . . .

“We can do the best to promote responsible use of our tools, [and] support law enforcement when these things are being used for illegal or nefarious purposes,” he says. “But there are people out there who are going to abuse it, that’s unfortunate. We can do [some things] from a tool perspective to limit that, but at the end of the day, you can’t stop people from doing it; they can do it in their living room.”

Whether it’s anonymous Redditors, big players like Adobe, or academia itself, progress is being made through all channels on advanced audio and visual manipulation, and no one is promising to hit the brakes. So how should we reckon with a dissolving reality? We simply don’t know yet.

Link to the rest at Fast Company


1 thought on “The War On What’s Real”

  1. Which means making realistic covers is in the hands of everyone with a computer. Or unrealistic if that’s the way the story goes …

    MYMV and you not get conned.
