Video

Life Without Google

9 February 2019

Amazon’s Super Bowl Ad

30 January 2019

Sfumato

29 January 2019

PG discovered a lovely word this morning.

Sfumato comes from the Italian sfumare, meaning to shade or tone down.

According to Wikipedia, sfumato is a painting technique for softening the transition between colours, mimicking an area beyond what the human eye is focusing on, or the out-of-focus plane. Wikipedia points to the Mona Lisa as an example, particularly around the eyes.

Here’s a bit more explanation.


The Flickr Blog has just released its Top 25 Photos on Flickr in 2018 From Around The World that includes several photos that feature sfumato techniques.

100% DIY: Interview with Cellist Zoë Keating

20 January 2019

From Indie Digital Media:

Zoë Keating is a cellist and composer whose music has appeared in TV shows like Breaking Bad and the Sherlock Holmes drama Elementary. She’s released several albums and EPs of her original music, and has recorded with artists such as Amanda Palmer.

All of this Keating does with a “100% DIY” approach, as she wrote in an LA Times op-ed in 2013. She owns the rights to her music and controls the distribution, by putting it up herself on iTunes and other platforms like Bandcamp.

Even better, this DIY approach has been successful. In the following interview, you’ll see a detailed breakdown of how much money she made from her music in 2018. In summary, she earned $20,828 from streaming and $42,229 from digital downloads and physical albums. That’s not counting revenue from concerts, licensing fees from TV and movie soundtracks, and other income.

. . . .

In her end-of-year Tumblr post, Keating outlined her streaming royalties for 2018. She earned $12,231 from Spotify, from 2,252,293 streams. That equates to about half a cent per stream. She estimated $3,900 from Apple Music and $2,800 from Pandora, and everything else brought in two- or three-figure sums (including $71 from Napster, the once-popular file-sharing service that didn’t pay artists a penny in its prime). All up, she earned a bit over $20,000 from all streaming sources in 2018.

I asked Keating how her revenue from digital downloads and physical media (CDs and LPs) compared with the streaming royalties.

“In 2018 there were 5,024 downloads and 4,093 physical albums sold on Bandcamp,” she replied, “which after packaging, shipping, and tax netted me $28,729.”

She made a further $13,500 from 6,610 iTunes downloads (albums and songs combined) in 2018, down 11% from the previous year. This, said Keating, parallels the industry decline in digital downloads.

“iTunes download revenue has been gradually going down for me over the years,” she said.

Streaming services are, of course, to blame, since they’re making the act of downloading less and less common.

. . . .

This trend is also reflected on Bandcamp, the leading platform for indie musicians to sell their music online.

“Bandcamp sales are not as large as you’d think,” Keating told me. “Bandcamp does not market or advertise or receive any kind of press, unlike the major music services. When I mention Bandcamp, many listeners say ‘oh what is that?’ So my sales are from the listeners who take the trouble to visit my website, are comfortable with the concept [of buying from her site] and purchase the music directly. But people tend to use their service of choice and only a fraction buy from me directly.”

. . . .

In a recent Facebook post, Keating wrote that “the hardest problem I have as an artist these days is reaching the people who already love my music.” Given that she has 52,000 Facebook followers and 989,000 Twitter followers, that shows how hard it is for even popular indie creators to get attention on social media platforms.

Link to the rest at Indie Digital Media

How We See the World

20 January 2019

Perhaps not directly related to books and writing at first glance, but it may provide some ideas about how different characters view the world and themselves, and how they may wish to craft the way others see them.

Given his personal interest in photography, PG will note that apps and programs for enhancing and modifying photographs have exploded since the advent of cell-phone cameras. Not that long ago, Photoshop and Lightroom were pretty much the only games in town, but modifications that would have taken a long time to get right with those expensive tools are now possible in seconds with a $3 app.

PG apologizes for the autoplay setting. The video is in a format that didn’t provide an option to let the viewer decide whether and when to start it.


The War On What’s Real

28 December 2018

Not exactly to do with writing, but, potentially, a basketful of writing prompts.

From Fast Company:

Nine years ago, I sat at ground zero of the face-swapping revolution. I was inside the screening room of Ed Ulbrich, who was hot off building the technology that had transformed Brad Pitt’s visage in The Curious Case of Benjamin Button. Ulbrich had welcomed me to his VFX studio Digital Domain to preview something that could top even the technical magic of Benjamin Button. He dimmed the lights, and the opening notes of his latest opus, Tron: Legacy, began to play. Soon, I was face to face with a digitally reconstructed Jeff Bridges, who wasn’t 60 years old anymore, but a spry 30. To Ulbrich, face-swapping was the Holy Grail of special effects–and we were witnessing its realization.

. . . .

“It was really hard, it was really slow, it was really tedious, it was really expensive,” Ulbrich said at the time. “And the next time we do it, it’s going to be less difficult, and less slow and less expensive.” Digital Domain eventually pivoted to resurrecting Tupac as a hologram, and later declared bankruptcy. But the company left its mark: digital face-swapping has become a mainstay tool of Hollywood, putting Arnold Schwarzenegger in Terminator Genisys and a young Carrie Fisher in Rogue One: A Star Wars Story.

The jaw-dropping effect is still fairly difficult and expensive, though–or it was until an anonymous Redditor named Deepfakes changed all that overnight and brought Ulbrich’s words back to me with perfect clarity. In early 2018, Deepfakes released a unique bit of code to the public that allows anyone to easily and convincingly map a face onto someone else’s head in full-motion video. Then, another Redditor quickly created FakeApp, which gave the Deepfakes scripts a user-friendly front end.

. . . .

Finally, 2016 brought the meteoric rise of automated machine learning–systems that could improve upon themselves–which Deepfakes describes as “steroids” for all the science that came before.

. . . .

Deepfakes’ tech is based on a relatively typical Generative Adversarial Network (GAN)–the sort used by basically every AI researcher in the world. Don’t think of a GAN as one AI but as two, each vying to be the teacher’s pet. One AI, called the generator, draws faces while the other, called the discriminator, critiques them by trying to spot fakes. Repeat this process millions of times and both AIs keep improving, making one another better in a battle that can be waged for as long as your computer is plugged in. Deepfakes’ results become pretty convincing after about half a day of computing on a low-end PC.
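
To make the generator-versus-discriminator loop concrete, here is a minimal GAN training sketch in PyTorch, fitting a toy one-dimensional distribution rather than faces. It is an illustration of the general technique only; the network sizes, learning rates, and toy data are assumptions for the demonstration, not anything from the actual Deepfakes code.

# Minimal GAN sketch in PyTorch (illustrative only; not the Deepfakes code).
# The generator learns to mimic a toy 1-D "real" distribution while the
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise vector fed to the generator (an assumption)

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: samples clustered near 3.0
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: score real samples as 1, generated samples as 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the just-updated discriminator into scoring fakes as 1.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the "real" mean of 3.0.
print(generator(torch.randn(1000, latent_dim)).mean().item())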

Similar neural networks have had many wonderful, humanity-improving impacts. They can spot skin cancer early, or discover new drug treatments for the diseases of today. The sudden improvement in the voice recognition of Amazon Alexa and Google Home can be attributed in large part to GANs, too.

But fundamentally, there’s a problem when one machine trains another to create media that’s indistinguishable from the real thing: the resulting content, by design, can withstand the highest possible professional scrutiny.

. . . .

“The more techniques you develop to distinguish fact from fiction, the better the fiction becomes,” explains Jon Brandt, Director of the Media Intelligence Lab at Adobe Research, and the first AI researcher the company hired 15 years ago.

At Adobe, Brandt’s job isn’t to create future products but to spearhead and coordinate the bleeding-edge IP with which they’ll be infused. For more than a decade, Adobe has owed many of its improvements to AI. In 2004, it introduced its first AI feature with automatic red-eye removal. Then face tagging in 2005. Now AI powers dozens of features, from cropping and image searching to lip-syncing.

For the most part, these are handy updates to Adobe’s creative tools, gradually updating the features of its apps. But knowing where my questioning was leading him–that Adobe is the biggest, most profitable company in image manipulation, and single-handedly ushered in the reality distortion field of Photoshop that we all live in today–Brandt doesn’t mince words.

“The elephant in the room is Sensei,” he says. “It’s Adobe’s name around how we are approaching AI and machine learning for our business. It is not a product–which is frequently a source of confusion–but a set of technologies.”

. . . .

Thanks to machine learning–and a small army of research interns Adobe recruits every summer to publish work at the company–Adobe can create a doppelgänger of your own voice, to make you say things you never said. It can automatically stitch together imaginary landscapes. It can make one photo look stylistically identical to another photo. And it can remove huge objects from videos as easily as you might drag and drop a file on your desktop.

These research experiments–which, to be clear, haven’t been rolled out into Adobe products–aren’t just powerful media manipulators. They’re designed to be accessible, too. They don’t require years of expertise, honing your craft with artisanal tools. With the infusion of AI assisting the user, the apprentice becomes an instant master. Soon, Adobe plans to make image manipulation as simple as talking to your phone.

“It’s part of our mission to make our tools as easy, and enable our creatives to express themselves, as readily as possible,” says Brandt. “That’s developed a need for us to have more and more understanding of the context and content of what they’re working on, which requires a machine learning approach.”

. . . .

“We can do the best to promote responsible use of our tools, [and] support law enforcement when these things are being used for illegal or nefarious purposes,” he says. “But there are people out there who are going to abuse it, that’s unfortunate. We can do [some things] from a tool perspective to limit that, but at the end of the day, you can’t stop people from doing it; they can do it in their living room.”

Whether it’s anonymous Redditors, big players like Adobe, or academia itself, progress on advanced audio and visual manipulation is being made through every channel, and no one is promising to hit the brakes. So how should we reckon with a dissolving reality? We simply don’t know yet.

Link to the rest at Fast Company


Does AI Enhance Creativity?

22 December 2018

From Forbes:

The sophistication of artificial intelligence (AI) software is giving rise to a healthy debate about human creativity versus machine creativity.

Whilst there is general agreement that AI will eventually take over many task-orientated jobs, there is skepticism over whether occupations that require high creative intelligence will become automated.

. . . .

One of the main benefits of AI is saving time on mundane tasks. Working with a machine can ease the workload for creatives and allow them more time for strategic and creative thinking. But it’s more than just a time saver. From providing data insights that enable marketers to better understand their consumers, to ideating and iterating basic ideas to aid the creative process, AI can provide valuable support.

. . . .

AI is also a powerful tool and partner for musicians. There is an entire industry built around AI services for creating music, and big players like Google and Spotify are getting a piece of the action. Many of the systems work by using deep learning, a type of AI that relies on analyzing multiple layers of data. Everything from dance anthems to pop classics can be analyzed for chords, tempo, length, and so on, so the software can detect patterns and create music. AI platform Amper’s co-founder, Michael Hobe, says: “It’s more of intelligence augmentation. For me, it’s allowing more people to be creative and then allowing the people who already have some of these creative aspects to really further themselves.”
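
To give a concrete flavor of that kind of analysis, here is a small sketch using the open-source librosa audio library to pull out tempo, a chroma (pitch-class) profile often used as a stand-in for chords, and track length. The function name and feature choices are illustrative assumptions, not any vendor’s actual pipeline.

# Illustrative sketch: extracting the kinds of features (tempo, harmony,
# length) that music-analysis systems commonly work from, using librosa.
import librosa
import numpy as np

def basic_music_features(path):
    # Decode the audio file into a waveform and sample rate.
    y, sr = librosa.load(path)
    # Estimate the tempo in beats per minute.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    # 12-bin pitch-class energy over time, a common proxy for harmony/chords.
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
    # Track length in seconds.
    duration = librosa.get_duration(y=y, sr=sr)
    return {
        "tempo_bpm": float(np.atleast_1d(tempo)[0]),
        "mean_chroma": chroma.mean(axis=1),  # average pitch-class profile
        "duration_s": duration,
    }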

. . . .

Recently, The & Partnership London turned to tech company Visual Voice to build an AI platform that could write the next Lexus ad script, with visual recognition support from IBM Watson. The first step was to feed the machine the right information. The AI was trained on 15 years’ worth of Cannes Lions-winning car and luxury advertising to find trends associated with acclaimed advertising, and it was taught to be intuitive. This was done by drawing on emotional-intelligence data and a study of intuition conducted in partnership with applied scientists.

. . . .

Google’s AI boutique DeepMind is developing an AI with imagination. It’s this distinctly human ability to construct a plan, to see the consequences of actions before they are taken, that could really shake things up.

Link to the rest at Forbes

Here’s the Lexus ad created entirely by artificial intelligence:

Home Alone with Google Assistant

21 December 2018
