By GINA KOLATA
Science, so the story goes, is a meticulously built edifice. Discoveries build on those that preceded them. Research is stimulated by studies that came before.
But what, then, can explain the findings by two investigators at Johns Hopkins University School of Medicine? The researchers, Karen A. Robinson and Dr. Steven N. Goodman, looked at how often published papers on clinical trials in medicine cite previous clinical trials addressing the same question.
They report in the Jan. 4 issue of Annals of Internal Medicine what Dr. Goodman describes as “a rather shocking result.” He summarizes: “No matter how many randomized clinical trials have been done on a particular topic, about half the clinical trials cite none or only one of them.”
“As cynical as I am about such things, I didn’t realize the situation was this bad,” Dr. Goodman said.
It seems, Dr. Goodman said in an e-mail, that “either everyone thinks their study is really unique (when others thought it wasn’t), or they want to unjustifiably claim originality, or they just don’t know how or want to look.”
The situation can have serious consequences for patients, said Sir Iain Chalmers, editor of the James Lind Library, which is a source of information on appropriate tests of medical treatments. He said some patients have suffered severe side effects and even died in studies because researchers were not aware of previous studies documenting a treatment’s dangers.
“That’s the tragedy,” he said. “Not only is it unscientific, it is unethical.”
Dr. Goodman said their results might help explain a troubling phenomenon in medicine: All too often, despite a multitude of clinical trials on a particular subject, the data do not supply the answers doctors need to treat patients.
“This shows part of what’s behind it,” Dr. Goodman said. Failure to cite can affect hypotheses and conclusions.
“If you are not citing the most similar studies, it is really hard to imagine that the evidence they provided played a role in the formulation of your hypothesis,” Dr. Goodman said. And, he added, if researchers do not cite other studies, they cannot play a role in formulating conclusions. “If the eighth study is positive, and the preceding seven were cold negative, is it proper to report that a treatment works?” he said. “This may not be the fire, but it’s a heck of a lot of smoke.”
Dr. Robinson, an assistant professor in the divisions of internal medicine and health sciences informatics, and Dr. Goodman, a professor of epidemiology and biostatistics and editor of the journal Clinical Trials, began their study by identifying 227 meta-analyses, which are studies that combine relevant previous studies to glean data from pooled evidence. For example, a meta-analysis might collect all studies of a drug whose effectiveness in individual studies was equivocal. The analysis would ask whether, with all the results combined, the drug appears to work.
The 227 meta-analyses cited a total of 1,523 clinical trials. For each clinical trial, the investigators asked how many of the other trials, published before it on the same topic and included with it in the meta-analysis, were cited.
They never expected to find so few citations.
“It is a pretty bad situation,” Dr. Goodman said. “There were rumblings of this before, but they did not show the phenomenon like this does.”
He added that he and Dr. Robinson did not ask whether investigators cited prior studies in their grant applications, nor do they know for certain why so little previous research is cited.
One reason might be that investigators do not think many of the results from previous studies apply to theirs.
That is why, in a recent paper in Hepatology, Dr. Stephen Harrison of Brooke Army Medical Center in San Antonio and his colleagues did not cite any of 10 clinical trials used in a meta-analysis that followed his paper. He was studying the effects of a weight loss drug, orlistat, on liver function in overweight patients with fatty liver disease.
Explaining why he failed to cite the other studies, he said, “I limited my discussion mainly to therapies that had been studied in fatty liver disease, not just obesity or diabetes.”
There are several steps along the way to a published paper where researchers might be asked about already published papers on the same topic. Those who finance the research, the ethics committees that review some studies and the journals that publish the studies all could ask the investigators how they assured themselves they had found prior relevant results.
But, Dr. Goodman said, none of those groups feel any official responsibility.
“It’s sort of a blind spot,” he said. “People sort of assume researchers know the literature, know their own field, know the studies.”
This article has been revised to reflect the following correction:
Correction: January 22, 2011
An article on Tuesday about the failure of many published papers on clinical trials in medicine to cite previous related studies, using information from researchers at Johns Hopkins University School of Medicine, erroneously included work by Dr. Beverly B. Green of Group Health in Seattle as an example of such a lapse. One of Dr. Green’s papers, about blood pressure monitoring and control, and published in The Journal of the American Medical Association, did in fact cite a scientific review of prior studies that included many of the 21 trials that the Hopkins researchers said her paper ignored. It was not the case that her paper failed to cite any of those 21 studies. (The article included a response from Dr. Green in which she explained her general approach to choosing citations, but did not specifically address the 21 trials. After the article appeared, she contacted The Times to give a more precise rebuttal to the Johns Hopkins findings.)
Article link: http://www.nytimes.com/2011/01/18/health/research/18cite.html?_r=2&ref=health
Published on January 25, 2011