Non-Results in Clinical Trials and Beyond

September 27th, 2004

Posted by: Roger Pielke, Jr.

Last week’s Economist reports on a major change in the oversight of clinical trials. To date, not all clinical trials have been reported, which means that inconclusive results can be hidden and successful trials highlighted. Why does this matter? Consider what happens when studies are evaluated using the typical 95% threshold of statistical significance: on average, 1 in 20 tests of a truly ineffective treatment will appear significant just by chance. So when testing the effects of a new drug, with enough clinical trials there will inevitably be a statistically significant positive result at some point, even if the drug is ineffective. Hence the importance of knowing the results of all clinical trials related to a particular drug.
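A minimal sketch of that arithmetic (my illustration, not from the Economist piece): at a 5% significance level, the chance that at least one of n independent trials of a truly ineffective drug comes up "significant" is 1 − 0.95ⁿ, which grows quickly with n.

```python
# Probability that at least one of n independent trials of an
# ineffective drug crosses the 5% significance threshold by chance.
def prob_false_positive(n_trials, alpha=0.05):
    return 1 - (1 - alpha) ** n_trials

for n in (1, 5, 20):
    print(f"{n:2d} trials: {prob_false_positive(n):.1%}")
# →  1 trials: 5.0%
# →  5 trials: 22.6%
# → 20 trials: 64.2%
```

With 20 trials there is about a two-in-three chance of at least one spurious "success" to report, which is exactly why selective reporting is so misleading.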

The Economist notes:
“Legislation is in the works in both houses of America’s Congress to reform the reporting of trials. In particular, Chris Dodd, Tim Johnson and Edward Kennedy, three Democratic senators, are expected to propose, within the next week or two, a law that would increase compliance with existing requirements to post trial data to clinicaltrials.gov. It would probably adopt a proposal made by the AMA that registration in a central database be a requirement for the approval of human trials, as well as introducing new requirements to include trial results in the database.”

(For another view see the PhRMA www site here.)

There is no shortage of criticisms of methods used to assess the significance of a finding, whether the effects of a drug or any other cause-effect relationship.

What there is a shortage of (apparently) is the reporting of “non-findings” in pretty much any area of science in which statistical (or other types of) significance testing is reported as a result of research. Whether the issue is the creation of a model of an open system (which can easily be “tuned” to fit data) or the establishment of correlative relationships among variables (which can be mixed and matched, with a hypothesis developed after a match is found), non-findings would seem to be pretty important to understanding the reported results of many areas of science related to decision making, especially those studies that are not easily replicated (i.e., in contrast to those that occur in the controlled setting of a lab). I’ve seen this issue discussed and recognized in disciplines as varied as political science and meteorology, but I’ve seen little action in response.

I often wonder about this when I see colourful PowerPoint presentations in which the speaker shows an image of results related to, e.g., a weather forecast, a climate model scenario, crop production, an ecosystem’s evolution, or economic or political outcomes, that presents some very close comparison with some verifying data. The speaker surely has not chosen the image at random, but to put the best possible face on their research results. But I wonder: what does the entire family of possible images look like? This question is rarely asked or answered.
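The selection effect behind that "best possible face" can be sketched with a toy simulation (my illustration, with made-up numbers): generate 100 "model runs" that are pure noise with no skill at all, score each against some verifying data, and then look only at the best one. The best-of-100 run will show a respectable-looking correlation even though the median run, like the ensemble as a whole, has none.

```python
import random

random.seed(1)

# "Verifying data": a simple trend plus noise.
data = [0.1 * t + random.gauss(0, 1) for t in range(50)]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

# 100 "model runs" that are pure noise -- no real skill.
runs = [[random.gauss(0, 1) for _ in range(50)] for _ in range(100)]
scores = [correlation(run, data) for run in runs]

print(f"best of 100 runs: r = {max(scores):+.2f}")
print(f"median run:       r = {sorted(scores)[50]:+.2f}")
```

The slide shows the first number; the audience never sees the second. Asking what the whole family of images looks like is asking for the full distribution of scores, not its maximum.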

Because of these dynamics, just as in the case of clinical drug trials, in areas of interdisciplinary science dealing with open or non-stationary systems, much of what we think is significant is likely less significant than we think. At least today I’m 95% certain of this, but ask me 20 times to be sure!

Comments are closed.