Did a Person Write This Headline, or a Machine?


From Wired:

The tech industry pays programmers handsomely to tap the right keys in the right order, but earlier this month entrepreneur Sharif Shameem tested an alternative way to write code.

First he wrote a short description of a simple app to add items to a to-do list and check them off once completed. Then he submitted it to an artificial intelligence system called GPT-3 that has digested large swaths of the web, including coding tutorials. Seconds later, the system spat out functioning code. “I got chills down my spine,” says Shameem. “I was like, ‘Whoa, something is different.’”
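For readers curious what that kind of request looks like in practice, here is a minimal sketch of prompting GPT-3 through the openai Python client as it worked during the 2020 beta; the engine name, sampling parameters, and prompt wording are illustrative assumptions, not Shameem's actual setup.

```python
# Sketch: ask GPT-3 to write code from a plain-English app description.
# Uses the 2020-era Completion API; engine, parameters, and prompt are
# assumptions for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # key issued to beta testers

prompt = (
    "Description: a simple to-do list app where the user can add items "
    "and check them off once completed.\n\n"
    "Code:\n"
)

response = openai.Completion.create(
    engine="davinci",       # the original GPT-3 base model
    prompt=prompt,
    max_tokens=512,
    temperature=0.2,        # keep the output close to the prompt
    stop=["Description:"],  # stop before the model invents a new task
)

print(response["choices"][0]["text"])
```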

GPT-3, created by research lab OpenAI, is provoking chills across Silicon Valley. The company launched the service in beta last month and has gradually widened access. In the past week, the service went viral among entrepreneurs and investors, who excitedly took to Twitter to share and discuss results from prodding GPT-3 to generate memes, poems, tweets, and guitar tabs.

The software’s viral moment is an experiment in what happens when new artificial intelligence research is packaged and placed in the hands of people who are tech-savvy but not AI experts. OpenAI’s system has been tested and feted in ways it didn’t expect. The results show the technology’s potential usefulness but also its limitations—and how it can lead people astray.

. . . .

Other experiments have explored more creative terrain. Denver entrepreneur Elliot Turner found that GPT-3 can rephrase rude comments into polite ones—or vice versa to insert insults. An independent researcher known as Gwern Branwen generated a trove of literary GPT-3 content, including pastiches of Harry Potter in the styles of Ernest Hemingway and Jane Austen. It is a truth universally acknowledged that a broken Harry is in want of a book—or so says GPT-3 before going on to reference the magical bookstore in Diagon Alley.

Have we just witnessed a quantum leap in artificial intelligence? When WIRED prompted GPT-3 with questions about why it has so entranced the tech community, this was one of its responses:

“I spoke with a very special person whose name is not relevant at this time, and what they told me was that my framework was perfect. If I remember correctly, they said it was like releasing a tiger into the world.”

The response encapsulated two of the system’s most notable features: GPT-3 can generate impressively fluid text, but it is often unmoored from reality.

. . . .

When a WIRED reporter generated his own obituary using examples from a newspaper as prompts, GPT-3 reliably repeated the format and combined true details like past employers with fabrications like a deadly climbing accident and the names of surviving family members. It was surprisingly moving to read that he had died at the (future) age of 47 and was considered “well-liked, hard-working, and highly respected in his field.”

. . . .

Francis Jervis, founder of Augrented, which helps tenants research prospective landlords, has started experimenting with GPT-3 to summarize legal notices and other sources in plain English to help tenants defend their rights. The results have been promising, although he plans to have an attorney review the output before using it, and he says entrepreneurs still have much to learn about how to constrain GPT-3’s broad capabilities into a reliable component of a business.

More certain, Jervis says, is that GPT-3 will keep generating fodder for fun tweets. He’s been prompting it to describe art house movies that don’t exist, such as a documentary in which “werner herzog [sic] must bribe his prison guards with wild german ferret meat and cigarettes.” “The sheer Freudian quality of some of the outputs is astounding,” Jervis says. “I keep dissolving into uncontrollable giggles.”

Link to the rest at Wired