Audience-ology


From The Wall Street Journal:

When it comes to motion-picture exposés, who wouldn’t want to read about a secretive place “where famous directors are reduced to tears and multimillionaire actors reduced to fits of rage”? This image may evoke a Hollywood Babylon, but Kevin Goetz is describing ordinary movie theaters where audience test-screenings are conducted. In “Audience-ology: How Moviegoers Shape the Films We Love,” Mr. Goetz lays bare the ins and outs of survey questionnaires, demographics and psychographics, biometric wristbands, and night-vision cameras.

“Audience-ology” is an informal, entertaining brief for movie testing as an expression of effective audience empowerment. Mr. Goetz has been involved in audience-survey research for more than 30 years; in 2010 he formed Screen Engine/ASI, which currently conducts most of Hollywood’s test screenings. His experience leads him to embrace the wisdom of crowds, at least when it comes to films: “Not only do audiences know what a good movie is,” he tells us; “sometimes, they know better than the filmmakers themselves and the executives charged with shepherding the films to the screen.”

One might reasonably assume that Mr. Goetz has a vested interest in defending audience research, and he is certainly not shy about promoting his services. (“I’m lucky to have a company filled with experienced and dedicated professionals who can handle these complex projects,” etc.) Yet he supports his populist faith with numerous examples, spanning from the silent era to the present. He includes interviews with notable producers, directors and actors—among them, Ron Howard, Drew Barrymore and Richard Zanuck—who relate their experiences and insights. (There are limits to what the author can discuss about his own work, however, due to nondisclosure agreements.)

The practice of movie testing originated, we are told, around 1919, when the slapstick comedian Harold Lloyd honed his pratfalls based on audience reactions. By the 1930s, test screenings were extended to other genres, but the research process was still more intuitive than analytic, focusing on ad hoc questions and audience monitoring. During the 1970s, in-depth research methods became desirable as the costs and profits of movies grew. It was not until the new century, however, that Hollywood adopted the popular mantra of data-driven decisions: Testing now became “a protocol.”

This mandate amplified the significance of survey results. A film’s potential profitability is forecast by two scores: the rankings of overall quality (from “excellent” to “poor”) and the likelihood that viewers would recommend the movie to others. Studio suits and film creatives alike are understandably anxious as they huddle in the back rows during a screening, scrutinizing an audience’s spontaneous reactions, and later as they watch the focus-group interviews. Money, art and careers depend on the discernment, if not the kindness, of strangers.

Mr. Goetz demonstrates that testing often improves films in ways large and small. Audiences are adept at evaluating a movie in terms of its core elements, such as plot cohesion, character plausibility and thematic focus. Filmmakers sometimes lose sight of these while laboring on a detailed and lengthy production; audience feedback redirects them to the flaws that become obvious in retrospect.

In the case of “Thelma & Louise” (1991), for example, screeners loved everything about the movie except the original ending, which briefly showed the couple cruising down a road after they had driven off a cliff. Ridley Scott, the film’s director, intended the coda to be a visual metaphor for the pair’s spiritual partnership. Viewers considered it inauthentic and downgraded their scores. The coda was excised, the test scores rose and the ending has since become iconic.

Beginnings matter as well. An early version of “La La Land” (2016) lacked any songs during its first 12 minutes. Audiences settled in for what they assumed was a romantic comedy and found it jarring when the film unexpectedly became a musical. Fortunately the filmmakers had already shot a bravura song-and-dance sequence set on a Los Angeles freeway, which had been cut because it didn’t advance the plot. “Another Day of Sun” now starts the film, sounding the necessary keynote.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

PG wonders if anyone has ever heard about a publisher conducting this sort of research.

4 thoughts on “Audience-ology”

  1. … PG wonders if anyone has ever heard about a publisher conducting this sort of research.

    Well, we Indie author/publishers do something similar with beta readers. This type of “screener” can make very helpful suggestions before the book is published. For example, in my last couple of books, I have a Neanderthal character named Pook. But I first named her Puk in the drafts. Because of my background, I was phoneme-pronouncing her original name in my head as “pook.” But a beta reader objected to naming a young female character with a name that sounded close to Shakespeare’s Puck, or even worse, the act of vomiting. Which had never occurred to me. So Puk became Pook. Audience-ology at work.

    P.S. Having 100s of beta readers would be better than 5 (data-wise), but 1 in 5 is still 20%. 😉

  2. I have to admit that I haven’t heard of any publisher doing so, PG.

    But I’m not sure the two media are similar enough for that. Two things come to mind, right off the bat: a test screening gets the studio the audience’s immediate impressions, while the film is still completely fresh in their minds. Very hard to get that with books. The “test media” is also under the complete control of the studio. They are far less likely to have a pirate version pop up at or before release that, to make things worse, is the “not good” version of the product. Again, harder to accomplish with books.

    • Those are good points as well, WO.

      That said, I think I could design a pre-release test-reading program that would either eliminate or minimize anything showing up in public prior to release.

      A couple of thoughts come immediately to mind:

      1. Offer a $250 bonus for every test reader who returns their book. (I think I might stay away from ebooks for test readers, but see the following.)
      2. If ebooks were used for test readers, I would put a unique “spike” into the text of each copy: a small, distinctive variation in the wording that would tell me exactly which test reader allowed the ebook into the wild. Only a computer comparison of two ebook files would disclose the differences between them.
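      The “spike” idea is essentially a canary trap, and it is simple enough to sketch in a few lines of Python. The reader names and marker sentences below are invented for illustration; a real version would work on EPUB files rather than plain strings.

      ```python
      # Minimal sketch of per-reader "spikes": each test reader's copy carries
      # one unique marker sentence. If a copy leaks, scanning the leaked text
      # for each marker reveals whose copy it was.

      BASE_TEXT = "Chapter One. The story begins here..."

      # Hypothetical per-reader spikes: tiny, unique wording variants.
      SPIKES = {
          "reader_a": "The rain fell in heavy sheets.",
          "reader_b": "The rain came down in heavy sheets.",
          "reader_c": "Heavy sheets of rain fell all night.",
      }

      def make_copy(reader_id: str) -> str:
          """Build a watermarked copy for one test reader."""
          return BASE_TEXT + " " + SPIKES[reader_id]

      def identify_leaker(leaked_text: str):
          """Return the reader whose spike appears in the leaked text, or None."""
          for reader_id, spike in SPIKES.items():
              if spike in leaked_text:
                  return reader_id
          return None

      leaked = make_copy("reader_b")
      print(identify_leaker(leaked))  # → reader_b
      ```

      Note that, just as the comment says, none of the copies looks suspicious on its own; only comparing two copies (or checking a leak against the list of spikes) exposes the variation.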
