Among scientists, there is growing concern about publication bias, the tendency of researchers to report only positive, statistically significant results. This has prompted several researchers to explore how scientists report p-values (Bruns & Ioannidis, 2016; Head et al., 2015).

Some researchers, such as Chavalarias et al. (2016), have explored the proportion of p-values reported in different sections of papers (e.g., the results and abstract sections) over time and across different categories. They also examined whether reported p-values were significant, and the extent to which significant p-values were reported as exact values (with an equals sign) or as intervals (with a greater-than or less-than sign). Finally, they explored the number and proportion of abstracts and full-text articles that included at least one p-value ≤ .05.

Other researchers have focused on a particular practice in the reporting of p-values: p-hacking. This practice consists of manipulating analyses until a non-significant p-value appears significant. To determine whether p-hacking is present in a body of research, Simonsohn et al. (2014) developed the so-called p-curve. The p-curve is the distribution of statistically significant p-values for a set of studies, and it makes it possible to assess the reliability of published research.
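As a minimal sketch of how a p-curve is constructed, the code below takes a set of reported p-values (made-up illustrative numbers, not data from any real study), keeps only the statistically significant ones, and bins them in 0.01-wide intervals as in Simonsohn et al. (2014):

```python
# Sketch: building a p-curve from a set of reported p-values.
# The p-values below are invented for illustration only.
import numpy as np

p_values = np.array([0.003, 0.012, 0.021, 0.034, 0.041, 0.044,
                     0.048, 0.049, 0.07, 0.12, 0.30])

# The p-curve uses only statistically significant results (p < .05).
significant = p_values[p_values < 0.05]

# Bin the significant p-values into the intervals
# (0, .01], (.01, .02], ..., (.04, .05].
bins = np.arange(0, 0.06, 0.01)
counts, _ = np.histogram(significant, bins=bins)
for lo, n in zip(bins[:-1], counts):
    print(f"p in ({lo:.2f}, {lo + 0.01:.2f}]: {n} result(s)")
```

For these invented values the last bin, just below .05, holds the most results; whether such a pile-up reflects p-hacking is exactly the question the p-curve test is designed to answer.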

According to the theory, when there is no true effect, the distribution of p-values is uniform. When there is a true effect, the distribution of p-values is right-skewed: very low p-values are the most likely, and p-values near the significance threshold are the least likely. As the true effect size increases, the p-curve becomes more right-skewed. If there is p-hacking, the shape of the p-curve is altered close to the significance threshold (usually p = 0.05): a p-hacked p-curve shows an overabundance of p-values just below 0.05.
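These shape claims can be checked with a small simulation. The sketch below (sample size, effect size, and number of studies are illustrative assumptions, and it only demonstrates the skew, not Simonsohn et al.'s full inferential test) runs many two-group t-tests and compares the share of significant p-values falling below .025: roughly half under the null, as expected of a uniform distribution, and well over half under a true effect, reflecting the right skew.

```python
# Sketch: the p-curve is flat under the null and right-skewed under a true
# effect. Sample sizes and effect size are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_p_values(effect_size, n_per_group=30, n_studies=20000):
    """Simulate many two-group studies and return each t-test's p-value."""
    control = rng.normal(0.0, 1.0, (n_studies, n_per_group))
    treatment = rng.normal(effect_size, 1.0, (n_studies, n_per_group))
    return stats.ttest_ind(treatment, control, axis=1).pvalue

for d, label in [(0.0, "no effect"), (0.5, "true effect d = 0.5")]:
    p = simulate_p_values(d)
    sig = p[p < 0.05]            # the p-curve uses only significant results
    low = np.mean(sig < 0.025)   # share of significant p-values below .025
    print(f"{label}: {low:.0%} of significant p-values fall below .025")
```

A uniform p-curve puts about 50% of significant p-values below .025, whereas a right-skewed one concentrates most of them there; an excess just below .05 instead would be the signature of p-hacking.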

References

Bruns, S. B., & Ioannidis, J. P. A. (2016). p-Curve and p-Hacking in Observational Research. PLOS ONE, 11(2), e0149144. http://doi.org/10.1371/journal.pone.0149144

Chavalarias, D., Wallach, J. D., Li, A. H. T., & Ioannidis, J. P. A. (2016). Evolution of Reporting P Values in the Biomedical Literature, 1990-2015. JAMA, 315(11), 1141–1148. http://doi.org/10.1001/jama.2016.1952

Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The Extent and Consequences of P-Hacking in Science. PLOS Biology, 13(3), e1002106–15. http://doi.org/10.1371/journal.pbio.1002106

Simonsohn, U., & Nelson, L. D. (2014). P-curve: a key to the file-drawer. Journal of Experimental.

Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results. Perspectives on Psychological Science, 9(6), 666–681. http://doi.org/10.1177/1745691614553988