(Mis)Interpreting the p-value

“When a researcher accepts a 95% confidence level for a statistical test, this means that the researcher accepts a 1-in-20 chance of reporting nominally significant results that are, in fact, spurious. If a journal contains 20 articles, and each article presents a single result that is claimed to be significant at the 95% confidence level, then, on average, 1 of the 20 articles is presenting spurious results.”

D. Huron, “On the Virtuous and the Vexatious in an Age of Big Data,” Music Perception 31, no. 1 (2013): 4–9.


2 thoughts on “(Mis)Interpreting the p-value”

  1. Not quite. We often confuse the p-value with the significance level alpha. With alpha = .05, the rate of spurious results is *at most* 1 in 20; that figure holds exactly only if every reported p-value equals 0.05, and in practice p-values are often much smaller. In any case, I can’t stress it enough: people worry far too much about p-values and ignore the important stuff: assumptions and samples.


    • Simply put, the probability that a journal article presents “spurious results,” given the alpha it reports using, does not equal alpha. As you say, too much importance is placed on these numbers to the exclusion of the truly relevant stuff.
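The reply’s point can be made concrete with a back-of-the-envelope Bayes calculation. The power and base-rate numbers below are illustrative assumptions (they do not come from Huron’s article): the chance that a significant finding is spurious depends on how many of the tested hypotheses are true effects to begin with, not on alpha alone.

```python
# Hypothetical sketch: P(spurious | significant) under assumed values.
alpha = 0.05        # significance level of the test
power = 0.80        # assumed power against real effects
base_rate = 0.10    # assumed fraction of tested hypotheses that are real

# Probability a study reports a significant result at all:
p_sig = base_rate * power + (1 - base_rate) * alpha

# False discovery rate: of the significant results, how many are spurious?
fdr = (1 - base_rate) * alpha / p_sig
print(f"P(spurious | significant) = {fdr:.2f}")
# prints: P(spurious | significant) = 0.36
```

Under these (assumed) numbers, more than a third of the significant articles would be spurious, far from the 1-in-20 that a naive reading of alpha suggests.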

