A book excerpt – Retraction Watch

We are pleased to present an excerpt from Mistrust: Big Data, Data-Torturing, and the Assault on Science, a new book by Pomona College economics professor Gary Smith. The Washington Post said the book’s lessons “are very much needed.”

The fact that bitcoin price changes are driven by fear, greed and manipulation hasn’t stopped people from trying to decode them. Empirical models of bitcoin prices are a fantastic example of data torture because bitcoins have no intrinsic value, so their prices cannot be credibly explained by economic data.

Undeterred by this reality, an article from the National Bureau of Economic Research (NBER) reported the incredible efforts made by Yale University economics professor Aleh Tsyvinski and a graduate student, Yukun Liu, to find empirical patterns in bitcoin prices.

Tsyvinski currently holds an endowed chair named after Arthur M. Okun, who was a professor at Yale from 1961 to 1969, although he spent six of those eight years on leave working in Washington at the Council of Economic Advisers as a staff economist, council member and then chairman, advising Presidents John F. Kennedy and Lyndon Johnson on economic policy. He is best known for Okun’s Law, which states that a 1 percentage point reduction in unemployment will increase US output by about 2 percent, an argument that helped convince President Kennedy that using tax cuts to reduce unemployment from 7 to 4 percent would yield a huge economic gain.

After Okun’s death, an anonymous donor endowed a lecture series at Yale named after Okun, explaining that

Arthur Okun combined his special gifts as an analytical and theoretical economist with his keen concern for the well-being of his fellow citizens into a thoughtful, pragmatic and sustained contribution to the nation’s public policy.

The contrast between Okun’s focus on meaningful economic policy and Tsyvinski’s far-fetched bitcoin calculations is striking.

Liu and Tsyvinski report correlations between the number of weekly Google searches for the word bitcoin (compared to the average over the previous four weeks) and percentage changes in bitcoin prices one to seven weeks later. They also looked at the correlation between the weekly ratio of bitcoin hack searches to bitcoin searches and percentage changes in bitcoin prices one to seven weeks later. The fact that they reported bitcoin search results looking back four weeks and up to seven weeks ahead should alert us to the possibility that they tried other backward-forward combinations that didn’t work as well. Ditto for the fact that they did not use a four-week lookback for the bitcoin hack searches. They obviously tortured the data in their search for correlations.
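The multiplicity problem Smith describes can be illustrated with a small simulation. This is a hypothetical sketch, not Liu and Tsyvinski’s data or method: we generate pure noise for both “search interest” and “returns,” then try many lookback/horizon combinations and count how many produce nominally significant correlations purely by chance.

```python
import random
import statistics

random.seed(0)
n = 200  # weeks of simulated data

# Independent noise by construction: any "significant" correlation is spurious
searches = [random.gauss(0, 1) for _ in range(n)]
returns = [random.gauss(0, 1) for _ in range(n)]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.stdev(x), statistics.stdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((len(x) - 1) * sx * sy)

hits = 0
tried = 0
for lookback in range(1, 9):   # searches relative to trailing k-week average
    signal = [searches[t] - statistics.mean(searches[t - lookback:t])
              for t in range(lookback, n)]
    for horizon in range(1, 8):  # "returns" 1..7 weeks ahead
        x = signal[:len(signal) - horizon]
        y = returns[lookback + horizon:]
        tried += 1
        r = corr(x, y)
        # rough 5% two-sided cutoff for this sample size: |r| > 2/sqrt(len(x))
        if abs(r) > 2 / len(x) ** 0.5:
            hits += 1

print(f"{tried} lookback/horizon combinations tried, {hits} look 'significant'")
```

With 56 combinations tried on data that is noise by construction, a few “significant” correlations are expected from chance alone; a researcher who reports only the winners makes noise look like discovery.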

Yet only seven of their fourteen correlations appeared promising for predicting bitcoin prices. Owen Rosebeck and I looked at the predictions made from these correlations in the year following the study and found them to be useless. They might as well have flipped coins to predict bitcoin prices.

Liu and Tsyvinski also calculated the correlations between the number of weekly Twitter bitcoin posts and bitcoin returns one to seven weeks later. Unlike with the Google search data, they did not report results for bitcoin hack posts. Three of the seven correlations appeared useful, although two were positive and one was negative. With fresh data, none were helpful.

The only thing their data abuse yielded was random statistical correlations. Although the research was done by an eminent Yale professor and published by the prestigious NBER, the idea that bitcoin prices can be reliably predicted from Google searches and Twitter posts was a fantasy fueled by data torture.

The irony here is that scientists created statistical tools that were meant to ensure the credibility of scientific research, but which have had the perverse effect of encouraging scientists to torture data – making their research unreliable and undermining the credibility of all scientific research.


Traditionally, empirical research begins by specifying a theory and then collecting appropriate data to test the theory. Many people are now taking the shortcut of looking for patterns in data, unencumbered by theory. This is called data mining: researchers rummage through data without knowing what they will find.

Way back in 2009, Marc Prensky, an author and speaker with degrees from Yale and Harvard Business School, argued that

In many cases, researchers no longer need to make educated guesses, construct hypotheses and models, and test them with computer-based experiments and examples. Instead, they can mine the entire set of data for patterns that reveal effects, and produce scientific conclusions without further experimentation.

We are hard-wired to seek patterns, but the flood of data renders the vast majority of patterns waiting to be discovered illusory and useless. Bitcoin is again a good example. Since there is no logical theory (other than greed and market manipulation) that explains fluctuations in bitcoin prices, it is tempting to look for correlations between bitcoin prices and other variables without thinking too much about whether the correlations make sense. In addition to torturing data, Liu and Tsyvinski mined their data.

They calculated correlations between bitcoin prices and 810 other variables, including capricious elements such as the exchange rate between the Canadian dollar and the US dollar, the price of crude oil and stock returns in the auto, book and beer industries. You might think I’m making this up. Unfortunately, I’m not.

They reported that they found bitcoin returns to be positively correlated with consumer staples and healthcare stock returns and negatively correlated with manufactured goods and metal mining stock returns. These correlations make no sense, and Liu and Tsyvinski admitted they had no idea why these data were correlated: “We offer no explanations . . . . We only document this behavior.” A skeptic may ask: What is the point of documenting random connections?

And that’s all they found. The Achilles heel of data mining is that large data sets inevitably contain a huge number of random correlations that are only fool’s gold, no more useful than correlations between random numbers. Most random correlations do not hold up with fresh data, although some will happen to do so for a while. One statistical relationship that continued to hold during the period they studied and in the year after was a negative correlation between bitcoin returns and stock returns in the cardboard containers and boxes industry. This is surely serendipitous, and pointless.
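The fool’s-gold point can also be demonstrated with simulated data. In this illustrative sketch, only the number 810 comes from the excerpt; everything else is made up. We correlate one random “returns” series with 810 other random series, then check whether the in-sample “discoveries” survive on fresh random data.

```python
import random

random.seed(1)
weeks = 150
n_predictors = 810  # the number of candidate variables mentioned in the excerpt

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

target = [random.gauss(0, 1) for _ in range(weeks)]  # in-sample "returns"
fresh = [random.gauss(0, 1) for _ in range(weeks)]   # out-of-sample "next year"

threshold = 2 / weeks ** 0.5  # rough 5% two-sided significance cutoff
in_sample_hits = 0
survivors = 0
for _ in range(n_predictors):
    series = [random.gauss(0, 1) for _ in range(weeks)]
    if abs(corr(series, target)) > threshold:
        in_sample_hits += 1
        if abs(corr(series, fresh)) > threshold:
            survivors += 1

print(f"'significant' in sample: {in_sample_hits} of {n_predictors}")
print(f"still 'significant' on fresh data: {survivors}")
```

Roughly 5 percent of the 810 pure-noise predictors come out “significant” in sample, and almost all of them fail out of sample, while one or two may survive by luck, exactly like the cardboard-box correlation.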

Scientists have compiled huge databases and created powerful computers and algorithms to analyze data. The irony is that these resources make it very easy to use data mining to discover random patterns that are fleeting. The results are reported and then discredited, and we become increasingly skeptical of scientists.
