Disruptive Innovation

How AI is Learning to Play with Words

14 September 2019

From ReadWrite:

Imagine you go to a bookstore and notice an exciting cover. You pick up the book, read the summary on the back, and skim the rave reviews. The plot seems intriguing enough, but when you check for the writer, it says “by AI-something.” Would you buy the book, or would you think it a waste of money? We will face such decisions in the future. And who will be responsible for such writings? That shows how AI is learning to play with words.

You may as well decide now if you will purchase content written by AI. That’s what the future will bring — AI is learning to play with words.

All of us have gotten used to chatbots and their limited capacity, but it appears their boundaries will be surpassed. Dario Amodei, OpenAI’s research director, informs us they have created a language modeling program that is very imaginative, to say the least. Its latest achievement was generating counterarguments in discussions with the researchers.

The program was fed a variety of articles, blogs, websites, and other content from the internet. Surprisingly, it managed to produce an essay worthy of any reputable writing service, and on a particularly challenging topic, by the way (Why Recycling Is Bad for the World).

Did the researchers do anything to help the program by providing specific, additional input? Certainly not. GPT-2, OpenAI’s new algorithm, did everything on its own. It excelled in different tests, such as storytelling and predicting the next word in a sentence. Admittedly, it’s still far from inventing an utterly gripping story from beginning to end, as it tends to stray off topic — but it has great potential.

What sets GPT-2 apart from other similar AI programs is its versatility. Typically, such programs are skilled only in certain areas and can complete only specific tasks. This AI language model, however, uses its input to deal successfully with a variety of topics.

Link to the rest at ReadWrite

Communication Re-Imagined with Emotion AI

15 July 2019

From ReadWrite:

There has long been a chasm between what we perceive artificial intelligence to be and what it can actually do. Our film, literature, and video game representations of “intelligent machines” depict AI as detached but highly intuitive interfaces. We will find communication re-imagined with emotion AI.

. . . .

As these artificial systems are being integrated into our commerce, entertainment, and logistics networks, we are witnessing the emergence of emotional intelligence. These smarter systems have a better understanding of how humans feel and why they feel that way.

The result is a “re-imagining” of how people and businesses can communicate and operate. These smart systems are drastically improving the voice user interface of voice-activated systems in our homes. AI is improving not only facial recognition but changing what is done with that data.

. . . .

Humans use thousands of subverbal cues when they communicate. The tone of their voice, the speed at which someone speaks – these are all hugely important parts of a conversation but aren’t part of the “raw data” of that conversation.

New systems designed to measure these verbal interactions are now able to look at emotions like anger, fear, sadness, happiness, or surprise based on dozens of metrics related to specific cues and expressions. Algorithms are being trained to evaluate the minutiae of speech in relation to one another, building a map of how we read each other in social situations.

Systems are increasingly able to analyze the subtext of language based on the tone, volume, speed, or clarity of what is being said. Not only does this help these systems better identify the gender and age of the speaker, but they are also growing increasingly sophisticated at recognizing when someone is excited, worried, sad, angry, or tired. While real-time integration of these systems is still in development, voice analysis algorithms are becoming better able to identify critical concerns and emotions as they get smarter.
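
To make that concrete, here is a minimal sketch in the spirit of the systems described above: extract a few vocal cues (average pitch, pitch variability, loudness, and a rough speaking rate) from a recording, then hand them to an ordinary classifier. The feature choices, file names, and labels are hypothetical placeholders, not any vendor’s actual pipeline; real systems use far richer features and far more data.

```python
import numpy as np
import librosa  # audio analysis library
from sklearn.linear_model import LogisticRegression

def prosodic_features(path):
    """Summarize a recording with a few of the cues mentioned above:
    average pitch, pitch variability, loudness, and speaking rate."""
    y, sr = librosa.load(path, sr=16000)
    f0, _, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)  # pitch contour (NaN when unvoiced)
    rms = librosa.feature.rms(y=y)[0]                     # frame-by-frame loudness
    onsets = librosa.onset.onset_detect(y=y, sr=sr)       # rough syllable-like events
    duration = len(y) / sr
    return np.array([
        np.nanmean(f0),          # average pitch
        np.nanstd(f0),           # monotone vs. animated delivery
        rms.mean(),              # average volume
        len(onsets) / duration,  # crude speaking-speed proxy
    ])

# Hypothetical labeled recordings; a real system would train on thousands.
paths = ["angry_01.wav", "calm_01.wav", "angry_02.wav", "calm_02.wav"]
labels = ["anger", "happiness", "anger", "happiness"]
X = np.vstack([prosodic_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict([prosodic_features("new_call.wav")]))  # e.g. ["anger"]
```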

. . . .

The result of this is a striking uptick in the ability of artificial intelligence to replicate a fundamental human behavior. We have Alexa developers actively working to teach the voice assistant to hold conversations that recognize emotional distress, the US Government using tone-detection technology to detect the symptoms and signs of PTSD in active-duty soldiers and veterans, and increasingly advanced research into the impact of specific physical ailments like Parkinson’s on someone’s voice.

While this has been done only at a small scale, it shows that the data behind someone’s outward expression of emotion can be cataloged and used to evaluate their current mood.

Link to the rest at ReadWrite

Barnes & Noble Takeover Shows Retail Theme Is Technology Change

21 June 2019

From Seeking Alpha:

  • Takeover of Barnes & Noble highlights the importance of technology change in media retailing.
  • Lessons from Borders and Blockbuster bankruptcies are still relevant.
  • Loyal customer base supports ongoing Barnes & Noble mall presence.

Barnes & Noble, the largest US book retailer with a total of 620 stores, announced plans this month to be acquired by Elliott Management (a $34 billion New York private equity hedge fund) for $683 million (including transfer of debt).

. . . .

The important benefit of this takeover for Barnes & Noble shareholders (as well as Barnes & Noble’s landlords, the Retail REITs) is that this is a takeover in anticipation of a turnaround. Elliott Management also owns UK book retailer Waterstones and plans to put Waterstones’ successful CEO, James Daunt, in charge of both companies. It appears that Barnes & Noble has found a good home.

With 627 Barnes & Noble stores in the US and 280 Waterstones locations in the UK, Elliott Management is facing off against Amazon, the online juggernaut believed to sell as much as 50% of all new hard-copy books as well as a large share of e-books and used books. Barnes & Noble has a successful website allowing loyal customers to purchase books, movies, music, toys, and games, but it cannot compete with Amazon in size or selection, customer history, or the ability to take advantage of cross-selling and financing opportunities.

Still, Barnes & Noble knows their customer base well, having used loyalty programs to reach out to their frequent shoppers, and should be able to take advantage of their friendly environment for book lovers at well-established stores. I think we won’t see many Barnes & Noble stores close, at least not at first; we are far more likely to see discounting and special offers at Barnes & Noble. Customers should feel upgraded.

. . . .

Although the greatest threat to Barnes & Noble’s future remains Amazon (both for online sales of hard-copy books and for e-books sold on Nook), I think the true threat is technology change, as we have seen over the past 12 years in the way media is delivered and consumed by today’s shoppers. Two earlier retail failures, Blockbuster Video and Borders, still have something to tell us about current retail challenges.

Link to the rest at Seeking Alpha

The Rise of Robot Authors: Is the Writing on the Wall for Human Novelists?

26 March 2019

From The Guardian:

Will androids write novels about electric sheep? The dream, or nightmare, of totally machine-generated prose seemed to have come one step closer with the recent announcement of an artificial intelligence that could produce, all by itself, plausible news stories or fiction. It was the brainchild of OpenAI – a nonprofit lab backed by Elon Musk and other tech entrepreneurs – which slyly alarmed the literati by announcing that the AI (called GPT2) was too dangerous for them to release into the wild, because it could be employed to create “deepfakes for text”. “Due to our concerns about malicious applications of the technology,” they said, “we are not releasing the trained model.” Are machine-learning entities going to be the new weapons of information terrorism, or will they just put humble midlist novelists out of business?

. . . .

GPT2 is just using methods of statistical analysis, trained on huge amounts of human-written text – 40GB of web pages, in this case, that received recommendations from Reddit readers – to predict what ought to come next. This probabilistic approach is how Google Translate works, and it is also the method behind Gmail’s automatic replies (“OK.” “See you then.” “That’s fine!”). It can be eerily good, but it is not as intelligent as, say, a bee.
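
For readers curious what “predicting what ought to come next” looks like mechanically, here is a toy sketch of the statistical idea. GPT2 itself is a large neural network rather than a word-count table, but the principle of learning continuation probabilities from text is the same. The corpus and seed below are placeholders.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words were observed to follow it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, seed, length=12):
    """Extend the seed by repeatedly sampling a probable next word."""
    out = seed.lower().split()
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no observed continuation in this tiny corpus
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = ("it was a bright cold day in april and the clocks were striking "
          "thirteen . it was a dream of a day in the spring .")
model = train_bigrams(corpus)
print(generate(model, "it was"))  # e.g. "it was a bright cold day in the spring ."
```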

Right now, novelists don’t seem to have much to fear. Fed the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – the machine continued the narrative as follows: “I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”

. . . .

Did the AI do any better with Jane Austen? The opening phrase of Pride and Prejudice – “It is a truth universally acknowledged” – provoked the machine to gabble on: “that when a nation is in a condition of civilization, that it is in a great measure the business of its leaders to encourage the habits of virtue, and of industry, and of good order among its people.” This does sound rather like some 19th-century political bloviator, even if a slightly broken version. (The second “that” is redundant, and it should read “in great measure” without the indefinite article.)

. . . .

Is there greater cause to worry further down the literary food chain? There have for a while already been “AI bots” that can, we hear, “write” news stories. All these are, though, giant automated plagiarism machines that mash together bits of news stories written by human beings. As so often, what is promoted as a magical technological advance depends on appropriating the labour of humans, rendered invisible by AI rhetoric. When a human writer commits plagiarism, that is a serious matter. But when humans get together and write a computer program that commits plagiarism, that is progress.

. . . .

The makers’ announcement that this program is too dangerous to be released is excellent PR, then, but hardly persuasive. Such code, OpenAI warns, could be used to “generate misleading news articles”, but there is no shortage of made-up news written by actual humans working for troll factories. The point of the term “deepfakes” is that they are fakes that go deeper than prose, which anyone can fake. Much more dangerous than disinformation clumsily written by a computer are the real “deepfakes” in visual media that respectable researchers are eagerly working on right now. When video of any kind can be generated that is indistinguishable from real documentary evidence – so that a public figure, for example, can be made to say words they never said – then we’ll be in a world of trouble.

. . . .

Perhaps a more realistic hope for a text-only program such as GPT2, meanwhile, is simply as a kind of automated amanuensis that can come up with a messy first draft of a tedious business report – or, why not, of an airport thriller about famous symbologists caught up in perilous global conspiracy theories alongside lissome young women half their age. There is, after all, a long history of desperate artists trying rule-based ruses to generate the elusive raw material that they can then edit and polish. The “musical dice game” attributed to Mozart enabled fragments to be combined to generate innumerable different waltzes, while the total serialism of mid-20th‑century music was an algorithmic approach that attempted as far as possible to offload aesthetic judgments by the composer on to a system of mathematical manipulations.

. . . .

But until robots have rich inner lives and understand the world around them, they won’t be able to tell their own stories. And if one day they could, would we even be able to follow them? As Wittgenstein observed: “If a lion could speak, we would not understand him”. Being a lion in the world is (presumably) so different from being a human in the world that there might be no points of mutual comprehension at all. It’s entirely possible, too, that if a conscious machine could speak, we wouldn’t understand it either.

Link to the rest at The Guardian

PG says, “We have a lot of rain in June. Is the buzz dead better than the couple? The maddening kill crawls into the wealthy box. When does the zesty liquid critique the representative?”

(PG’s comments are courtesy of Random Word Generator, TextFixer and Word Generator.)

And also:

After leaving the crumpled planet Abydos, a group of girls fly toward a distant speck. The speck gradually resolves into a contented, space tower.

Civil war strikes the galaxy, which is ruled by Brad Willis, a derelict wizard capable of lust and even murder.

Terrified, an enchanted alien known as Michelle Thornton flees the Empire, with her protector, Chloe Noris.

They head for Philadelphia on the planet Saturn. When they finally arrive, a fight breaks out. Noris uses her giant knife to defend Michelle.

(Plot Generator)

Finally, a blurb for a romance novel:

In this story, a serene police chief ends up on the run with a realistic witch-hunter. What starts as professional courtesy unexpectedly turns into a passionate affair.

(Seventh Sanctum)

Click here, then click the Play button to listen to the blurb

(Natural Readers)

Our Software Is Biased like We Are. Can New Laws Change That?

24 March 2019

From The Wall Street Journal:

Lawyers for Eric Loomis stood before the Supreme Court of Wisconsin in April 2016, and argued that their client had experienced a uniquely 21st-century abridgment of his rights: Mr. Loomis had been discriminated against by a computer algorithm.

Three years prior, Mr. Loomis was found guilty of attempting to flee police and operating a vehicle without the owner’s consent. During sentencing, the judge consulted COMPAS (aka Correctional Offender Management Profiling for Alternative Sanctions), a popular software system from a company called Equivant. It considers factors including indications a person abuses drugs, whether or not they have family support, and age at first arrest, with the intent to determine how likely someone is to commit a crime again.

The sentencing guidelines didn’t require the judge to impose a prison sentence. But COMPAS said Mr. Loomis was likely to be a repeat offender, and the judge gave him six years.

An algorithm is just a set of instructions for how to accomplish a task. Algorithms range from simple computer programs, defined and implemented by humans, to far more complex artificial-intelligence systems, trained on terabytes of data. Either way, human bias is part of their programming. Facial-recognition systems, for instance, are trained on millions of faces, but if those training databases aren’t sufficiently diverse, they are less accurate at identifying faces with skin colors they’ve seen less frequently. Experts fear that could lead to police forces disproportionately targeting innocent people who are already under suspicion solely by virtue of their appearance.
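
The standard way to surface the disparity described above is a disaggregated evaluation: score the system separately for each subgroup instead of reporting one overall accuracy. A minimal sketch, with invented groups and records:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: (group, predicted, actual) triples; returns accuracy per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += (predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

# Invented face-matching results: equal volume per group, unequal accuracy.
results = [
    ("group_a", "match", "match"), ("group_a", "no_match", "no_match"),
    ("group_a", "match", "match"), ("group_a", "no_match", "no_match"),
    ("group_b", "match", "no_match"), ("group_b", "no_match", "match"),
    ("group_b", "match", "match"), ("group_b", "no_match", "no_match"),
]
print(accuracy_by_group(results))  # {'group_a': 1.0, 'group_b': 0.5}
```

An aggregate score over all eight records (75%) would hide exactly the gap the experts are worried about.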

. . . .

No matter how much we know about the algorithms that control our lives, making them “fair” may be difficult or even impossible. Yet as biased as algorithms can be, at least they can be consistent. With humans, biases can vary widely from one person to the next.

As governments and businesses look to algorithms to increase consistency, save money or just manage complicated processes, our reliance on them is starting to worry politicians, activists and technology researchers. The aspects of society that computers are often used to facilitate have a history of abuse and bias: who gets the job, who benefits from government services, who is offered the best interest rates and, of course, who goes to jail.

“Some people talk about getting rid of bias from algorithms, but that’s not what we’d be doing even in an ideal state,” says Cathy O’Neil, a former Wall Street quant turned self-described algorithm auditor, who wrote the book “Weapons of Math Destruction.”

“There’s no such thing as a non-biased discriminating tool, determining who deserves this job, who deserves this treatment. The algorithm is inherently discriminating, so the question is what bias do you want it to have?” she adds.

. . . .

An increasingly common kind of algorithm predicts whether parents will harm their children, basing the decision on whatever data is at hand. If a parent is low-income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the data is simply unavailable. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.
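
A toy score makes the mechanism plain: the model penalizes records that only exist for families who use public services, so an identical history is scored differently depending on whether it ever generated data. The field names and weights below are invented for illustration and are not drawn from any real system.

```python
def risk_score(record):
    """Hypothetical child-welfare risk score built from 'whatever data is at hand'."""
    score = 0.0
    if record.get("used_public_mental_health_services"):
        score += 2.0
    if record.get("received_income_assistance"):
        score += 1.5
    return score

low_income_parent = {
    "used_public_mental_health_services": True,
    "received_income_assistance": True,
}
# Same underlying history, but treatment was paid for privately,
# so no government record exists and the fields are simply absent.
insured_parent = {}

print(risk_score(low_income_parent))  # 3.5
print(risk_score(insured_parent))     # 0.0, lower only because the data is missing
```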

The irony is that, in adopting these modernized systems, communities are resurfacing debates from the past, when the biases and motivations of human decision makers were called into question. Ms. Richardson says panels that determine the bias of computers should include not only data scientists and technologists, but also legal experts familiar with the rich history of laws and cases dealing with identifying and remedying bias, as in employment and housing law.

Link to the rest at The Wall Street Journal

Boom Time for Used Booksellers?

19 February 2019

As PG was opening a couple of packages of hardcopy books for Mrs. PG (she does read a lot of ebooks, but, in some cases, used books are less expensive and some books she wants in hardcopy to share with family and/or friends), it occurred to him that Amazon has almost certainly given used booksellers an opportunity to reach a far wider group of prospective purchasers than were ever available to them in physical used bookstores.

Most of the hardcopy used books that arrive in the mail come well-packaged, and most are clearly packed with more sophisticated equipment than a roll of stamps and a stack of envelopes.

So, is PG correct about Amazon and used booksellers?

Has the ability to sell to a much wider online audience affected the pricing of used books?

Has the used book business undergone consolidation with small used bookstores closing and selling their inventory to large, online-focused used booksellers?

Are there people who are paid by larger used booksellers to be scouts for large quantities of available used books?

Real-Time Continuous Transcription with Live Transcribe

5 February 2019

Not necessarily to do with books, but two of PG’s offspring are hearing-impaired, so he follows topics like this. He’s also interested in developments in artificial intelligence, so it’s a double win for him.

From The Google AI Blog:

The World Health Organization (WHO) estimates that there are 466 million people globally who are deaf or hard of hearing. A crucial technology in empowering communication and inclusive access to the world’s information for this population is automatic speech recognition (ASR), which enables computers to detect audible languages and transcribe them into text for reading. Google’s ASR is behind automated captions in YouTube, presentations in Slides, and also phone calls. However, while ASR has seen multiple improvements in the past couple of years, the deaf and hard of hearing still mainly rely on manual transcription services like CART in the US, Palantypist in the UK, or STTR in other countries. These services can be prohibitively expensive and often must be scheduled far in advance, diminishing the opportunities for the deaf and hard of hearing to participate in impromptu conversations as well as social occasions. We believe that technology can bridge this gap and empower this community.

Today, we’re announcing Live Transcribe, a free Android service that makes real-world conversations more accessible by bringing the power of automatic captioning into everyday, conversational use. Powered by Google Cloud, Live Transcribe captions conversations in real-time, supporting over 70 languages and more than 80% of the world’s population. You can launch it with a single tap from within any app, directly from the accessibility icon on the system tray.
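
Live Transcribe’s own source isn’t public, but the post notes it is powered by Google Cloud. Here is a minimal sketch of real-time captioning with the Cloud Speech-to-Text streaming API, which exposes the same interim-results behavior; the audio source and language are placeholders:

```python
from google.cloud import speech

def stream_captions(audio_chunks, language="en-US"):
    """Print live captions for an iterable of 16 kHz LINEAR16 audio chunks."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code=language,
    )
    streaming_config = speech.StreamingRecognitionConfig(
        config=config,
        interim_results=True,  # partial captions appear while the speaker talks
    )
    requests = (speech.StreamingRecognizeRequest(audio_content=chunk)
                for chunk in audio_chunks)
    for response in client.streaming_recognize(streaming_config, requests):
        for result in response.results:
            print(result.alternatives[0].transcript)
```

In an app like Live Transcribe, the chunks would come from the device microphone; whatever the audio source, the caption loop has the same shape.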

Link to the rest at The Google AI Blog

The Future of Music, Where Middlemen Have Met Their Match

4 January 2019

From OZY:

“Hey, Dad. I want to show you a song.”

The speaker was my 16-year-old daughter. Music for her? Primarily visual and to be enjoyed in video clips. Video clips that did not always feature videos. Sometimes it was just some clip art and the music. But no record store, no record album, no tape — reel-to-reel, eight track, cassette or otherwise — and finally no compact disc. And she’s not alone in how she’s digging on the music she digs on.

According to Nielsen’s music report, digital and physical album sales declined (again) last year — from about 205 million in 2016 to 169 million copies in 2017 — down 17 percent. Over the past five years, right up to Nielsen’s mid-year report, sales had fallen by roughly 75 percent. That decline is coinciding with a streaming juggernaut that continues to grow. How much so? Last year streaming skated, quite easily, beyond 400 billion streams. You include video streams and you have figures over 618 billion. You look back at the year before and you see a 58 percent increase in audio streams.

While this buoyed the damned-near-moribund music industry to the tune of 12.5 percent growth from 2016 to last year, the music business is now, as it has been, all about discovering the music that can generate all of those streams. And that’s where things get curious because record labels that are used to creating heat now have to go places where the heat is being created to stay viable and vibrant.

. . . .

With a number of presently high-profile artists — Odd Future, Lil Yachty, Post Malone, etc. — being “discovered” on places like SoundCloud over the past five years, entire communities of music fans can beat both the hype and the Spotify/Pandora/SiriusXM radio/Amazon algorithms that suggest if you liked this, you might also like that, by starting there, and branching out. First stop: Instagram.

“People come in all the time and play me stuff from their IG feeds,” says Mark Thompson, founder of Los Angeles-based Vacation Vinyl (that sells, yes, primarily vinyl). “So I’m hearing bands that it soon becomes pretty clear have no label, no representation, nothing but an IG feed and maybe some music recorded on their laptops.”

To put this in perspective, in July 2018, Instagram added the music mode in Stories, and just that quickly streaming started to feel … old. Because from the musicians’ mouths to our ears, unmediated music finds its way from the creator to the consumer. Spotify is trying to adapt too — it has over the past year begun to sign deals with independent musicians to give them access to the platform.

. . . .

“It’s free,” she says, having endured speeches about listening to unpaid/stolen music. Since she and her friends don’t ever listen to more than 60 seconds of any song, at least while I am around, this raises the question: Is it a business and is it sustainable in the same way that Apple Music, Tidal, Deezer or iHeartRadio have managed to be?

“Unknown,” says former promoter and music industry executive Mark Weiss. “But the business is where the ears are. And if the business is any damn good it’ll figure out how to stay in the conversation.”

. . . .

Flash-forward to record contracts from the mid-1990s that covered cassette tapes, vinyl, compact discs and “future technologies not yet known.” The digitization of analog music had already changed the landscape for everything from crime to interior design.

Whereas previously you’d have needed a turntable, an amplifier, maybe a preamp, a tape player, a receiver, speakers and a subwoofer to listen to the music that you’d be playing off of tapes, vinyl or CDs, after everything was digitized you just needed a phone and speakers.

Link to the rest at OZY
