U.S. law requires posting summarized results on ClinicalTrials.gov, a service of the National Institutes of Health, within one year of study completion for certain categories of industry-sponsored trials.
The European Union is considering following the U.S. lead, yet in some fields compliance with the U.S. law remains poor.
Does it matter? There is increasing public pressure to report the results of all clinical trials. The belief is that this would eliminate publication bias and improve public access, but that belief is not evidence-based. What is the point of reading about failed industry trials? The products can't be approved, they will never be released, and the start-up company behind the work will be sold off for parts soon after.
And what are unbiased results? U.S. law requires companies to pay for trials: for a product to be approved, it must undergo successful clinical trials at company expense. There are no 'unbiased' labs, not in academia or the EPA or the FDA. Calling a trial biased because a company paid for it - which the law mandates - intentionally skews the cultural picture.
In a new paper, the authors make the case for more publication, seemingly because they had to do extra work to find results for their own analysis. They examined the Repository of Registered Analgesic Clinical Trials (RReACT) database, a scorecard for analgesic clinical trials in chronic pain (sponsored by an FDA grant to the University of Rochester), to describe the challenge of constructing a global open-access database of clinical trials and trial results. It seems odd that they would indict the entire system on that basis.
"We identified several perils and pitfalls of using the ICTRP," says Michael C. Rowbotham, MD, scientific director of the California Pacific Medical Center Research Institute in San Francisco. "Manual searches are necessary, as ICTRP does not reliably identify trials listed on multiple registries. Searching ICTRP as a whole yields different results from searching registries individually. Outcome measure descriptions for multiply-registered trials vary between registries. Registry-publication pairings are often inaccurate or incomplete. Ideally, a PubMed search on the trial registration number would reveal all study-related articles, but a recent analysis showed that about 40% of journal publications failed to include registration numbers. And grey literature results--such as trial-specific press releases or company statements, information found on the websites of pharmaceutical companies, and abstracts of poster/platform presentations at scientific meetings--are not permanent."
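The PubMed lookup Rowbotham describes - searching on a trial registration number - can be sketched with NCBI's public E-utilities API. This is a minimal illustration, not the authors' actual search algorithm; the NCT number below is a placeholder, and real use should respect NCBI's rate limits.

```python
# Sketch: build a PubMed E-utilities esearch query for a clinical-trial
# registration number. Illustrative only; not the paper's method.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(registration_number: str) -> str:
    """Build an esearch URL for a trial registration number.

    Searching PubMed's secondary-source-ID field ([si]) catches papers
    that carry the registry number in their metadata. As noted above,
    roughly 40% of publications omit the number, so this search is
    incomplete by design - exactly the gap the authors describe.
    """
    params = urlencode({
        "db": "pubmed",
        "term": f"{registration_number}[si]",
        "retmode": "json",
    })
    return f"{EUTILS}?{params}"

# Placeholder NCT number, not a real trial:
print(esearch_url("NCT01234567"))
```

Fetching that URL (e.g. with `urllib.request`) returns a JSON list of PubMed IDs, which is where the missing-registration-number problem becomes visible: a trial published without its number simply never appears in the result.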
The analysis focused on three frequently studied chronic pain diagnoses: post-herpetic neuralgia, fibromyalgia, and painful diabetic peripheral neuropathy. The initial build of RReACT was limited to randomized trials registered on ClinicalTrials.gov with a primary (or key secondary) outcome measure assessing analgesic drug efficacy - already difficult for a condition like fibromyalgia, often diagnosed when a patient insists they are in pain and a doctor can't find a cause.
The database was then expanded to report on all of the primary registries in the World Health Organization's International Clinical Trials Registry Platform (ICTRP). The authors analyzed trial registration, registry functionality, and cross-registry harmonization, using a comprehensive search algorithm to find trial results in the peer-reviewed literature and in grey literature. Yes, they affirm their belief in publication bias while ironically relying on grey literature themselves.
A total of 447 unique trials were identified, with 86 trials listed on more than one registry.
The ICTRP provides a single search portal to 15 primary registries, including ClinicalTrials.gov, which makes up the majority of the ICTRP anyway. ICTRP primary registries follow International Committee of Medical Journal Editors (ICMJE) guidelines and must have a national or regional focus, government support, nonprofit management, free public access, and an unambiguous trial identification method.
ClinicalTrials.gov is the largest ICTRP database, with more than 152,000 trials globally. The EU Clinical Trials Register (EU-CTR) is the second largest, with more than 21,000 trials. Current Controlled Trials (more than 11,000 trials), the oldest global registry, is hosted by BioMed Central. Five national registries each contain fewer than 1,000 trials. All ICTRP registries provide information about study design (i.e., randomization, blinding, control groups, inclusion/exclusion criteria, and outcome measures) and current study status.
Not all ICTRP registries track study changes, list additional study identifiers, or provide links to publications.
"Creating a single global registry would solve many of the problems we describe here," Rowbotham says. "However, international politics and funding limitations suggest this is a challenging goal. Despite its flaws, ICTRP does at least offer a single search portal."
They offer several suggestions. In addition to the simple remedy of including trial registration numbers on all meeting abstracts and peer-reviewed papers, they propose specific strategies to identify multiply-registered studies and to ensure accurate pairing of results and publications.
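One such strategy - matching trial records across registries that share a secondary identifier - reduces to a simple grouping problem. The record layout and field names below are hypothetical, not any registry's actual schema, and the paper's own matching strategies are not spelled out here.

```python
# Sketch: group trial records from different registries that share any
# secondary identifier, flagging them as one multiply-registered trial.
# Field names are hypothetical, not a real registry schema.
from collections import defaultdict

def group_multiply_registered(records):
    """records: list of dicts with 'registry_id' and 'secondary_ids'.
    Returns a list of sets of registry IDs that refer to one trial."""
    # Union-find over registry IDs, linked by shared secondary IDs
    parent = {r["registry_id"]: r["registry_id"] for r in records}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    seen = {}  # secondary_id -> first registry_id that carried it
    for r in records:
        for sid in r["secondary_ids"]:
            if sid in seen:
                union(r["registry_id"], seen[sid])
            else:
                seen[sid] = r["registry_id"]

    groups = defaultdict(set)
    for r in records:
        groups[find(r["registry_id"])].add(r["registry_id"])
    return list(groups.values())

# Hypothetical records: two registrations of one trial plus a singleton
records = [
    {"registry_id": "NCT01234567", "secondary_ids": ["U1111-1111-1111"]},
    {"registry_id": "EUCTR2010-000000-00", "secondary_ids": ["U1111-1111-1111"]},
    {"registry_id": "NCT07654321", "secondary_ids": []},
]
print(group_multiply_registered(records))
```

In the authors' data, this kind of matching is what separates the 447 unique trials from the 86 that appear on more than one registry; without it, a naive count would inflate the trial total.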
"Compliance might improve, especially for difficult-to-publish 'negative' studies, if posting results on trial registries could be made simpler and uniform. Alternative solutions to the problems of publication bias and selective reporting should also be explored. These might involve including journals specializing in publishing 'negative' results, creating user-friendly and publicly available databases to publish results, and raising the awareness of authors, reviewers, and editors about these issues," concludes Rowbotham.
Published in the journal PAIN