Sir
When suspected scientific misconduct occurs in a research department, it is likely that more than one person knows about it. In their Commentary (Nature 453, 980–982; 2008), Sandra Titus and colleagues avoid the multiple-reporting problem in estimating the incidence of misconduct by surveying one person per academic department about suspected misconduct within that department. However, I question their extrapolation of these survey results, which they claim projects an alarming picture of under-reporting.
The authors derive a rate of 0.03 cases of suspected misconduct per department per year, but settle on a more conservative figure of 0.015. They then apply this rate to the total population of 155,000 researchers funded by the US National Institutes of Health (NIH), arriving at an extrapolated estimate of a minimum of 2,325 cases of suspected misconduct per year.
It is not appropriate to extrapolate from a sample of departments to a universe of individuals. Applying the 0.03 rate to a rough estimate of 10,000 departments with NIH funding, the authors could claim an extrapolated estimate of only 300 cases of suspected misconduct per year.
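The gap between the two extrapolations can be checked directly from the figures quoted above; this is a minimal arithmetic sketch, with variable names chosen for illustration.

```python
# Figures quoted in the letter (Titus et al., Nature 453, 980-982; 2008).
conservative_rate = 0.015   # suspected-misconduct cases per department per year (conservative)
survey_rate = 0.03          # rate actually derived from the survey
nih_researchers = 155_000   # total NIH-funded researchers
nih_departments = 10_000    # rough estimate of departments with NIH funding

# Titus et al. apply a per-department rate to a population of individuals:
cases_per_individuals = conservative_rate * nih_researchers
print(cases_per_individuals)   # 2325.0

# Applying the per-department rate to departments, the proper base unit:
cases_per_departments = survey_rate * nih_departments
print(cases_per_departments)   # 300.0
```

The roughly eightfold difference arises solely from the choice of base unit, not from the survey data themselves.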
Titus and colleagues cite our earlier study (J. Swazey, M. Anderson and K. Lewis Am. Sci. 81, 542–553; 1993) as methodologically weak in its estimate of misconduct incidence, because we allowed multiple reports within departments. The difference is that we neither aimed nor claimed to measure incidence, but rather to measure scientists' exposure to suspected misconduct. The authors' extrapolation seems, like ours, to estimate exposure and not incidence.
Swazey, J. Integrity: how to measure breaches effectively. Nature 454, 575 (2008). https://doi.org/10.1038/454575a