From Jane Friedman:
Reliably, every year or so, you’ll see headlines about new research that claims author incomes are on the decline. The most recent is from the Authors’ Licensing and Collecting Society (ALCS), a British nonprofit run by writers for writers.
. . . .
Before I continue with this epic-length post, here’s the short version: I don’t trust these surveys’ results and I question their usefulness in improving the fortunes of writers. Too often it feels like promotion of a self-interested narrative from writers’ organizations, with the outcome boring and predictable: There is media coverage that claims writers’ incomes are plummeting, a few big-name authors come out and try to shame publishers or even society for not valuing writers properly, debate ensues, then everyone gets back to work—until a new study emerges.
This latest gnashing of teeth has motivated me to finally write a comprehensive post about why these reports are so frustrating, in the hopes more people will ask critical questions and notice their flaws. In the long run, I hope organizations will either reassess how these studies get done, or focus on more useful support of professional authors. However, a brief side note for industry insiders: For the purposes of this article, I’m setting aside the fact that this research may be done mainly to support arguments and legislation for strengthening and protecting authors’ copyright and thus (presumably) their earnings potential. Frankly, I don’t think weak copyright law is the problem, and I believe such efforts have little effect on the average author. My thinking on copyright aligns with what you might hear from Cory Doctorow—but that’s a post for another day.
When considering author-income research, my concerns fall into these areas.
- How reliable is the research?
- What other data points do we have?
- If the data is directionally correct, then why are incomes declining?
- Does the research tell us something meaningful or useful about how writers earn a living?
Link to the rest at Jane Friedman
Back in the dawn of the modern era, PG graduated from college. His favorite professor told him he needed to find a real job and sent him to the university placement center. Lo and behold, a couple of weeks later, he had a real job. For more than minimum wage.
That job was as an economic and marketing research analyst for a large financial services company. Suffice it to say, PG had a lot to learn, as his formal education lacked any mention of economic and marketing research.
One of the things PG learned is that most surveys of this or that group are garbage. The reasons are simple. If you wish to learn something real about a large group of people, you have two choices:
- Query every single person in the group in a way that will result in an accurate answer.
- Carefully select a truly random sample that accurately represents the group and query them in a way that will result in an accurate answer.
Both approaches are expensive and require a lot of work – by people who know how to create questions that will result in accurate answers. Real random samples assume you understand the constituent qualities of the entire group very well and are able to obtain answers from respondents that include appropriate numbers of those with each of the constituent qualities. (If your group contains 50% women and 50% men, you may be in trouble if 80% of responses are from women. Ditto if your group contains 50% native English speakers and 50% non-native English speakers or if your group contains 50% who earn less than $10,000 per year and 50% who earn more than $10,000 per year.)
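The problem with an unrepresentative sample can be made concrete with a small simulation. This is a minimal sketch, not real survey data: the 50/50 population split, the group income distributions, and the 80/20 response skew are all invented for illustration, echoing the hypothetical above.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical population: 50% in group A, 50% in group B, where the
# two groups happen to have different average incomes (made-up numbers).
population = [("A", random.gauss(40_000, 5_000)) for _ in range(5_000)] + \
             [("B", random.gauss(60_000, 5_000)) for _ in range(5_000)]

true_mean = sum(income for _, income in population) / len(population)

# A flawed "survey": 80% of the responses come from group A and only
# 20% from group B, even though the population is split 50/50.
group_a = [p for p in population if p[0] == "A"]
group_b = [p for p in population if p[0] == "B"]
biased_sample = random.sample(group_a, 800) + random.sample(group_b, 200)
biased_mean = sum(income for _, income in biased_sample) / len(biased_sample)

print(f"true mean income:     {true_mean:,.0f}")   # roughly 50,000
print(f"survey's mean income: {biased_mean:,.0f}")  # roughly 44,000
```

Because the over-represented group earns less in this made-up population, the survey understates the true average by thousands of dollars, and no amount of additional responses with the same skew would fix it.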
The surveying process can go wrong in many different ways. These ways include:
- Obtaining answers from people who aren’t part of the target group.
- Failing to obtain significant numbers of answers from people who are part of the group.
- Obtaining answers to ambiguous questions.
- Asking questions that are likely to generate answers that are guesses.
- Using the cheapest method of obtaining some sort of responses instead of the best method of obtaining accurate answers.
Quite often, it is useful to conduct some preparatory research (usually by talking to people) to understand how they view the subject of the research and to help map the types of responses people in the target demographic are likely to give when asked about the subject generally or specific sub-parts of the subject. If you create a multiple-choice question that doesn’t include all the meaningful alternative answers for the target audience, that’s one way the survey can fail and result in inaccurate results.
Here are a couple of examples of bad multiple-choice questions:
You indicated that you eat at Burger King less than once every month. Why don’t you eat at Burger King more often? (choose one answer):
- There are no Burger King restaurants near my house
- I became sick after eating at a Burger King
- I’ve never eaten at Burger King
What is your overall opinion of the Authors’ Licensing and Collecting Society?
- Pretty good
- The Best Ever
The OP discusses other problems with 99.999% of surveys of author income. In short, poor surveys of author income are probably less useful than collecting anecdotal evidence of author income, in part because people know to treat anecdotal evidence with suspicion, while a bad survey’s numbers carry an unearned air of authority.