Definitely not about writing directly, but quite possibly a wonderful story about human nature, which forms the basis of many works of fiction and is present at all times, everywhere.
From The Economist:
IF YOU WRITE a book called “The Honest Truth About Dishonesty”, the last thing you want to be associated with is fraud. Yet this is where Dan Ariely, a behavioural economist at Duke University, finds himself, along with his four co-authors of an influential study about lying.
In 2012 Mr Ariely, along with Max Bazerman, Francesca Gino, Nina Mazar and Lisa Shu, published a study on how to nudge people to be more honest. They concluded that when asked to affirm that information is truthful before they give it, rather than afterwards, people are more likely to be honest. The results stemmed from three experiments: two conducted in a laboratory (led by Mr Bazerman, Ms Gino and Ms Shu), and a third based on data from a car-insurance company (led by Mr Ariely and Ms Mazar).
Several researchers have tried and failed to replicate the results from the laboratory tests. But it is the car insurance study which is driving the most serious doubts. It asked policyholders to self-report the number of miles they had driven. Customers were asked to sign a statement on the reporting form which said, “I promise that the information I am providing is true”; half of the forms had this declaration at the top, half had it at the bottom. All of the car-owners had previously reported their odometer readings to the insurance company, giving a baseline for the data (the time elapsed between the baseline readings and the experiment varied for each customer). Mr Ariely and Ms Mazar found that when customers were asked to sign the statement at the top of the form, there was a 10.25% increase in the number of self-reported miles, compared with the miles reported on forms where the statement was signed at the bottom. The more miles a car has driven, the more expensive the insurance will be. The researchers concluded that signing the truthfulness statement at the top of the form resulted in people being more honest (and thus on the hook for higher insurance premiums).
With over 400 citations on Google Scholar, these findings have spread far and wide. But on August 17th Leif Nelson, Joe Simmons and Uri Simonsohn, who run a blog called Data Colada, published an article, based on the work of a group of anonymous researchers, dissecting what they believe to be evidence of fraud. There are several eyebrow-raising concerns, although two in particular stand out: the number of miles reported by the policyholders, and the way in which the numbers were supposedly recorded.
In a random sample of cars, one would expect the number of miles driven by each vehicle to follow a bell-shaped curve (such as a “normal distribution”). Some cars are driven a lot, some are barely driven, but most fall somewhere in between these extremes. But in the experiment from 2012, the number of miles driven follows a uniform distribution: just as many cars drove under 10,000 miles as drove between 40,000 and 50,000 miles, and not a single car drove more than 50,000 miles. Messrs Nelson, Simmons and Simonsohn suggest that a random number generator was used to add between zero and 50,000 miles to the original readings submitted by the customers.
The random number generator theory is backed by the second problem with the data. Many people, when asked to write down big numbers, round to the nearest ten, hundred or thousand. This can be seen in the data for the original odometer readings: nearly 25% of the mileages end in a zero. But in the experiment, each digit between zero and nine is equally represented in the final digit of the mileage reports. Humans tend to round numbers, but random generators don’t.
All five members of the original research group admit that the data in their study were fabricated. But all say they were duped rather than dishonest. “We began our collaboration from a place of assumed trust—rather than earned trust,” said Ms Shu, on Twitter. However, she declined to comment further to The Economist. Mr Ariely’s name is listed as the creator of the Excel spreadsheet containing the original data. But he says he has no recollection of the format of the data he received, speculating that he might have copied and pasted data sent to him into the spreadsheet. One explanation is that the insurance company, or a third party that collected data on its behalf, falsified the numbers. The Hartford, the Connecticut-based insurance company that allegedly provided data for the experiment, could not be reached for comment. Mr Ariely has requested that the study be retracted, as have some of his co-authors. And he is steadfast that his mistake was honest. “I did not fabricate the data,” he insists. “I am willing to do a lie detection test on that.”
Link to the rest at The Economist
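The distribution half of that argument is easy to see in code. Below is a minimal sketch in Python – invented numbers, not the study's actual data – of why a flat spread of mileages with a hard 50,000-mile ceiling looks nothing like real driving:

```python
# A hypothetical illustration of the Data Colada distribution argument --
# invented numbers, not the study's data. Real mileage clusters around a
# typical value; the suspect data was flat (uniform) from 0 to 50,000
# miles, with nothing at all above 50,000.
import random

random.seed(1)

# Plausible real-world miles driven: roughly bell-shaped around ~12,000
realistic = [max(0, int(random.gauss(12000, 6000))) for _ in range(10_000)]

# The pattern found in the suspect data: uniform on [0, 50000]
suspect = [random.randint(0, 50_000) for _ in range(10_000)]

def bin_counts(miles, width=10_000, top=50_000):
    """Count readings per `width`-mile bin; the last bin catches values above `top`."""
    bins = [0] * (top // width + 1)
    for m in miles:
        bins[min(m // width, len(bins) - 1)] += 1
    return bins

print("realistic:", bin_counts(realistic))  # peaked low, long thin tail
print("suspect:  ", bin_counts(suspect))    # near-equal counts in every bin
```

Real mileage piles up around a typical figure and thins out into a long tail; a uniform draw fills every bin about equally, which is the pattern Data Colada reported.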
From Buzzfeed News:
The paper also bolstered the reputations of two of its authors — Max Bazerman, a professor of business administration at Harvard Business School, and Dan Ariely, a psychologist and behavioral economist at Duke University — as leaders in the study of decision-making, irrationality, and unethical behavior. Ariely, a frequent TED Talk speaker and a Wall Street Journal advice columnist, cited the study in lectures and in his New York Times bestseller The (Honest) Truth About Dishonesty: How We Lie to Everyone — Especially Ourselves.
Years later, he and his coauthors found that follow-up experiments did not show the same reduction in dishonest behavior. But more recently, a group of outside sleuths scrutinized the original paper’s underlying data and stumbled upon a bigger problem: One of its main experiments was faked “beyond any shadow of a doubt,” three academics wrote in a post on their blog, Data Colada, on Tuesday.
The researchers who published the study all agree that its data appear to be fraudulent and have requested that the journal, the Proceedings of the National Academy of Sciences, retract it. But it’s still unclear who made up the data or why — and four of the five authors said they played no part in collecting the data for the test in question.
That leaves Ariely, who confirmed that he alone was in touch with the insurance company that ran the test with its customers and provided him with the data. But he insisted that he was innocent, implying it was the company that was responsible. “I can see why it is tempting to think that I had something to do with creating the data in a fraudulent way,” he told BuzzFeed News. “I can see why it would be tempting to jump to that conclusion, but I didn’t.”
. . . .
But Ariely gave conflicting answers about the origins of the data file that was the basis for the analysis. Citing confidentiality agreements, he also declined to name the insurer that he partnered with. And he said that all his contacts at the insurer had left and that none of them remembered what happened, either.
According to correspondence reviewed by BuzzFeed News, Ariely has said that the company he partnered with was the Hartford, a car insurance company based in Hartford, Connecticut. Two people familiar with the study, who requested anonymity due to fear of retribution, confirmed that Ariely has referred to the Hartford as the research partner.
The Hartford did not respond to multiple requests for comment from BuzzFeed News. Ariely also did not return a request for comment about the insurer.
. . . .
The imploded finding is the latest blow to the buzzy field of behavioral economics. Several high-profile, supposedly science-backed strategies to subtly influence people’s psychology and decision-making have failed to hold up under scrutiny, spurring what’s been dubbed a “replication crisis.” But it’s rarer that data is faked altogether.
And this is not the first time questions have been raised about Ariely’s research in particular. In a famous 2008 study, he claimed that prompting people to recall the Ten Commandments before a test cuts down on cheating, but an outside team later failed to replicate the effect. An editor’s note was added to a 2004 study of his last month when other researchers raised concerns about statistical discrepancies, and Ariely did not have the original data to cross-check against. And in 2010, Ariely told NPR that dentists often disagree on whether X-rays show a cavity, citing Delta Dental insurance as his source. He later walked back that claim when the company said it could not have shared that information with him because it did not collect it.
Link to the rest at Buzzfeed News
PG picked an online random number generator at random.
Somewhere in his brain, he remembered reading that random numbers generated by a computer are not truly random numbers, but pseudo-random numbers – he is not certain of the difference, but suspects that entering “Random Number Generator” into Google and picking one of the first listings to appear will turn up a pseudo-random number generator. Or something.
At any rate, here is a list of ten random numbers that PG created with the online random number generator – pseudo or non-pseudo, he can’t tell the difference:
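As for the difference: a computer's “random” numbers usually come from a deterministic algorithm that merely looks random. Here is a minimal sketch of what that means, using Python's standard library (a Mersenne Twister generator) rather than whatever online tool PG clicked on:

```python
# A minimal sketch of what "pseudo-random" means -- Python's standard
# `random` module (a Mersenne Twister generator), not PG's online tool.
# The output looks random, but the algorithm is deterministic: the same
# seed always produces the same sequence.
import random

random.seed(2021)  # fix the seed
first = [random.randint(1, 100) for _ in range(10)]

random.seed(2021)  # re-seed with the same value...
second = [random.randint(1, 100) for _ in range(10)]

print(first)
print(second)
print(first == second)  # True -- "random" only until you know the seed
```

Seed it with the same value twice and it produces the same “random” list twice – that repeatability is the whole difference between pseudo-random and truly random.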
If, as the OPs suggest, the main culprit is a TED Talk speaker and a Wall Street Journal advice columnist who used a random number generator to create the mileage figures upon which the whole ground-breaking study was based, it makes PG question the expertise of TED Talk speakers and Wall Street Journal advice columnists.
Additionally, is there a reason why none of these heavy-duty university mathematics and data science experts ever noticed that none of the mileage figures in the study was rounded off?
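For what it's worth, the rounding check takes only a few lines to run. A hedged sketch with invented numbers, not the study's data:

```python
# A hypothetical illustration of the trailing-digit test -- invented
# numbers, not the study's data. People reporting odometer readings often
# round to the nearest ten or hundred; raw random numbers spread their
# final digit evenly across 0-9.
from collections import Counter
import random

random.seed(3)

# Simulated human reports: about a quarter round to the nearest ten
human = [round(random.randint(0, 50_000), -random.choice([0, 0, 0, 1]))
         for _ in range(10_000)]

# Simulated fabricated reports: raw pseudo-random output
fabricated = [random.randint(0, 50_000) for _ in range(10_000)]

def last_digits(numbers):
    """Tally how often each final digit (0-9) appears."""
    return Counter(n % 10 for n in numbers)

print("human:     ", sorted(last_digits(human).items()))      # 0 heavily over-represented
print("fabricated:", sorted(last_digits(fabricated).items())) # close to uniform
```

On the simulated human reports, zero dominates the final-digit tally; on the raw generator output, every digit shows up about equally – exactly the tell described in the OPs.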