Clinical studies registered between 2007 and 2010, the central means by which preventive, diagnostic, and therapeutic strategies are evaluated, were dominated by small, single-center trials and showed significant heterogeneity (differences in design that make comparison difficult) in methodological approaches, including the use of randomization, blinding, and data monitoring committees, according to an analysis published in JAMA.

In 1997, Congress mandated the creation of the ClinicalTrials.gov registry to assist people with serious illnesses in gaining access to trials. In September 2004, the International Committee of Medical Journal Editors (ICMJE) announced a policy, which took effect in 2005, requiring registration of clinical trials as a prerequisite for publication.

Robert M. Califf, M.D., of the Duke Translational Medicine Institute, Durham, N.C., and colleagues conducted a study to examine fundamental characteristics of interventional clinical trials registered in the ClinicalTrials.gov database, focusing on study characteristics (data elements reported in trial registration) that are desirable for generating reliable evidence from clinical trials. A data set comprising 96,346 clinical studies from ClinicalTrials.gov was downloaded on September 27, 2010, and entered into a relational database for aggregate analysis. Interventional trials were identified, and analyses focused on 3 clinical specialties (cardiovascular, mental health, and oncology) that together encompass the largest number of disability-adjusted life-years lost in the United States. The researchers analyzed the characteristics of registered clinical trials as reflected in reported data elements; how those characteristics changed over time; differences in characteristics as a function of clinical specialty; and factors associated with the use of randomization, blinding, and data monitoring committees (DMCs).
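The pipeline described here (download the registry snapshot, load it into a database, restrict to interventional trials in the three focus specialties, and compare registration windows) can be sketched in a few lines. The snippet below is a minimal illustration in Python with pandas, not the authors' code; the flat table layout and the column names (study_type, specialty, registered) are assumptions for the example, since the actual ClinicalTrials.gov download is structured XML with many more fields.

```python
import pandas as pd

# Toy stand-in for the registry snapshot; real ClinicalTrials.gov records
# are richer, and these column names are illustrative assumptions.
studies = pd.DataFrame({
    "nct_id":     ["NCT0001", "NCT0002", "NCT0003", "NCT0004"],
    "study_type": ["Interventional", "Observational",
                   "Interventional", "Interventional"],
    "specialty":  ["cardiovascular", "oncology", "mental health", "oncology"],
    "registered": pd.to_datetime(["2005-03-01", "2008-06-15",
                                  "2009-01-20", "2010-08-31"]),
})

FOCUS = {"cardiovascular", "mental health", "oncology"}

# Restrict to interventional trials in the three focus specialties.
interventional = studies[
    (studies["study_type"] == "Interventional")
    & (studies["specialty"].isin(FOCUS))
]

# Split into the two registration windows compared in the analysis.
window1 = interventional[
    interventional["registered"].between("2004-10-01", "2007-09-30")
]
window2 = interventional[
    interventional["registered"].between("2007-10-01", "2010-09-30")
]
print(len(window1), len(window2))  # trial counts per registration window
```

On the real snapshot, the same two filters and the date split would yield the per-window denominators reported below.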

Analysis of the data indicated that the number of trials submitted for registration increased from 28,881 (October 2004 through September 2007) to 40,970 (October 2007 through September 2010). Although registration records were more complete in the later period, 59.4 percent of trials registered in that period reported not using a data monitoring committee. Most clinical trials were small in terms of numbers of participants: overall, 96 percent of these trials had an anticipated enrollment of 1,000 or fewer participants, and 62 percent had 100 or fewer. The median (midpoint) number of participants per trial was 58 for completed trials and 70 for trials registered but not yet completed.
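To make the enrollment summary concrete, the cut-point fractions and the medians by completion status are simple aggregates over the anticipated-enrollment field. The sketch below, again on toy data with assumed column names, shows the style of computation rather than the study's actual code.

```python
import pandas as pd

# Toy enrollment records; "enrollment" and "completed" are assumed
# field names chosen only for illustration.
trials = pd.DataFrame({
    "enrollment": [40, 58, 90, 650, 70, 1200, 30, 100],
    "completed":  [True, True, True, False, False, False, True, False],
})

# Fraction of trials at or below each enrollment cut point.
print((trials["enrollment"] <= 100).mean())   # share with 100 or fewer
print((trials["enrollment"] <= 1000).mean())  # share with 1,000 or fewer

# Median anticipated enrollment, completed vs. not-yet-completed trials.
print(trials.groupby("completed")["enrollment"].median())
```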

"The U.S. clinical trials enterprise has been marked by debate regarding funding priorities for clinical research, the design and interpretation of studies, and protections for research participants," according to background information in the article.
"Recent work highlights the inadequate evidence base of current practice, in which less than 15 percent of major guideline recommendations are based on high-quality evidence, often defined as evidence that emanates from trials with appropriate designs; sufficiently large sample sizes; and appropriate, validated outcome measures, as well as oversight by institutional review boards and data monitoring committees (DMCs) to protect participants and ensure the trial's integrity."

Data on funding source and number of sites were available for 37,520 of the 40,970 clinical trials registered during the 2007-2010 period. The largest proportion of these trials were funded neither by industry nor by the National Institutes of Health (NIH) (47 percent; n = 17,592), with 16,674 (44 percent) funded by industry, 3,254 (9 percent) by the NIH, and 757 (2 percent) by other U.S. federal agencies. The majority of trials were single site (66 percent); 34 percent were multisite.

"Heterogeneity in the reported methods by clinical specialty; sponsor type; and the reported use of DMCs, randomization, and blinding was evident," the authors write. "For example, reported use of DMCs was less common in industry-sponsored vs. NIH-sponsored trials, earlier-phase vs. phase 3 trials, and mental health trials vs. those in the other 2 specialties. In similar comparisons, randomization and blinding were less frequently reported in earlier-phase, oncology, and device trials."

The authors note that the finding of substantial differences in the use of randomization and blinding across specialties raises "fundamental questions about the ability to draw reliable inferences from clinical research conducted in that arena."

The researchers add that the finding that 50 percent of interventional studies registered from October 2007 to September 2010 were designed to include fewer than 70 participants may have important policy implications. "Small trials may be appropriate in many cases. … However, small trials are unlikely to be informative in many other settings, such as establishing the effectiveness of treatments with modest effects and comparing effective treatments to enable better decisions in practice."

"Our analysis raises questions about the best methods for generating evidence, as well as the capacity of the clinical trials enterprise to supply sufficient amounts of high-quality evidence needed to ensure confidence in guideline recommendations. Given the deficit in evidence to support key decisions in clinical practice guidelines as well as concerns about insufficient numbers of volunteers for trials, the desire to provide high-quality evidence for medical decisions must include consideration of a comprehensive redesign of the clinical trial enterprise."

Citation: JAMA. 2012;307[17]:1838-1847. 

In an accompanying editorial, "The Evolution of Trial Registries and Their Use to Assess the Clinical Trial Enterprise," Kay Dickersin, M.A., Ph.D., of the Johns Hopkins Bloomberg School of Public Health, Baltimore, and Drummond Rennie, M.D., of the University of California, San Francisco, and deputy editor, JAMA, write that "it appears that despite important progress, ClinicalTrials.gov is coming up short, in part because not enough information is being required and collected, and even when investigators are asked for information, it is not necessarily provided. As a consequence, users of trial registries do not know whether the information provided through ClinicalTrials.gov is valid or up-to-date."

"Trial registration is not some bureaucratic exercise but partial fulfillment of a promise to the patients who agree to participate in these trials on the understanding that the information learned will be made public. Given the evidence that registration of trials at inception can benefit patients, it is difficult to understand why some investigators and sponsors take this responsibility so lightly. Trial registries do not evolve on their own. Their content and the transparency they provide is influenced by investigators, systematic reviewers, clinicians, journal editors, sponsors, and regulators and also by patients and the public. Only through the generosity and positive engagement of all will something emerge that is truly useful."

Citation: JAMA. 2012;307[17]:1861-1864.