(Mis)Interpreting the p-value

“When a researcher accepts a 95% confidence level for a statistical test, this means that the researcher accepts a 1-in-20 chance of reporting nominally significant results that are, in fact, spurious. If a journal contains 20 articles, and each article presents a single result that is claimed to be significant at the 95% confidence level, then, on average, 1 of the 20 articles is presenting spurious results.”

D. Huron, “On the Virtuous and the Vexatious in an Age of Big Data,” Music Perception 31, no. 1 (2013): 4–9.


2 thoughts on “(Mis)Interpreting the p-value”

1. Julián says:

Not quite. We often mistake the p-value for the significance level alpha. With alpha = .05, the chance is *at most* 1 in 20. In reality, reported p-values are often much smaller than 0.05; the quoted statement holds only if every p-value sits exactly at 0.05. In any case, I can’t stress it enough: people worry way too much about p-values and ignore the important stuff: assumptions and samples.


• Simply put, the probability that a journal article presents “spurious results,” given the alpha it reports using, does not equal alpha. As you say, too much importance is placed on these numbers to the exclusion of the truly relevant stuff.
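The point in the comments above can be sketched with a small simulation. The parameters below (fraction of tested hypotheses that are real, statistical power) are illustrative assumptions, not figures from the post: among results significant at alpha = 0.05, the fraction that are spurious depends on the base rate of true effects and on power, and it can be far larger than 5%.

```python
import random

random.seed(0)

# Hypothetical parameters (assumptions for illustration, not from the post):
n_studies = 100_000
alpha = 0.05       # significance threshold
frac_real = 0.10   # 10% of tested hypotheses are actually true effects
power = 0.50       # chance a real effect reaches p < alpha

false_pos = 0      # true nulls that came out "significant"
true_pos = 0       # real effects that came out "significant"

for _ in range(n_studies):
    if random.random() < frac_real:
        # Real effect: significant with probability equal to power
        if random.random() < power:
            true_pos += 1
    else:
        # True null: significant with probability equal to alpha
        if random.random() < alpha:
            false_pos += 1

significant = false_pos + true_pos
fdr = false_pos / significant  # fraction of significant results that are spurious
print(f"Significant results: {significant}")
print(f"Fraction spurious among significant results: {fdr:.2f}")  # well above alpha
```

With these assumptions, roughly 0.9 × 0.05 of all studies are false positives and 0.1 × 0.5 are true positives, so nearly half of the significant results are spurious even though alpha is 0.05 — which is why "1 in 20 articles" does not follow from the significance level alone.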
