A joint UCLA-Harvard study finds that studies about medications published in the most influential medical journals are frequently designed in a way that yields misleading or confusing results. The findings appear in the Journal of General Internal Medicine.
For the study, researchers analyzed all the randomized medication trials published in the six highest-impact general medicine journals between June 1, 2008, and September 30, 2010, to determine the prevalence of three types of outcome measures that make data interpretation difficult. In addition, they reviewed each study’s abstract to determine the percentage that reported results using relative rather than absolute numbers, which can be misleading.
The six journals examined by the researchers—the New England Journal of Medicine, the Journal of the American Medical Association, The Lancet, the Annals of Internal Medicine, the British Medical Journal, and the Archives of Internal Medicine—included studies that used the following types of outcome measures, which have received increasing criticism from scientific experts, according to the researchers:
- Surrogate outcomes (37% of studies), which refer to intermediate markers, such as a heart medication’s ability to lower blood pressure, but which may not be a good indicator of the medication’s impact on more important clinical outcomes, like heart attacks.
- Composite outcomes (34%), which consist of multiple individual outcomes of unequal importance lumped together—such as hospitalizations and mortality—making it difficult to understand the effects on each outcome individually.
- Disease-specific mortality (27%), which measures deaths from a specific cause rather than from any cause; this can be a misleading measure because, even if a given treatment reduces one type of death, it could increase the risk of dying from another cause to an equal or greater extent.
Additionally, the researchers found that trials that used surrogate outcomes and disease-specific mortality were more likely to be exclusively commercially funded—for instance, by a pharmaceutical company.
While 45% of exclusively commercially funded trials used surrogate endpoints, only 29% of trials receiving non-commercial funding did. And while 39% of exclusively commercially funded trials used disease-specific mortality, only 16% of trials receiving non-commercial funding did. The researchers suggest that commercial sponsors of research may promote the use of outcomes that are most likely to indicate favorable results for their products.
The study also found that 44% of the abstracts reported results exclusively in relative, rather than absolute, numbers.
“It’s one thing to say a medication lowers your risk of heart attacks from two-in-a-million to one-in-a-million, and something completely different to say a medication lowers your risk of heart attacks by 50%. Both ways of presenting the data are technically correct, but the second way, using relative numbers, could be misleading,” said Danny McCormick, MD, the study’s lead author and a physician at the Cambridge Health Alliance and Harvard Medical School.
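The arithmetic behind McCormick's example can be sketched in a few lines. This is an illustrative calculation only, not code from the study; the function names and the two-in-a-million figures are taken from the quote above.

```python
def absolute_risk_reduction(control_risk, treated_risk):
    """Difference in event rates between the untreated and treated groups."""
    return control_risk - treated_risk

def relative_risk_reduction(control_risk, treated_risk):
    """Proportional reduction, expressed relative to the control-group risk."""
    return (control_risk - treated_risk) / control_risk

# Figures from the quote: risk falls from two-in-a-million to one-in-a-million.
control = 2 / 1_000_000
treated = 1 / 1_000_000

arr = absolute_risk_reduction(control, treated)  # one-in-a-million, a tiny absolute change
rrr = relative_risk_reduction(control, treated)  # 0.5, i.e. the headline "50% reduction"
```

Both numbers describe the same data; reporting only the relative figure (50%) hides how small the absolute benefit (one in a million) actually is.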
Still, the researchers acknowledge that the use of surrogate and composite outcomes and disease-specific mortality is appropriate in some cases. For example, these outcomes may be preferable in early-phase studies in which researchers hope to quickly determine whether a new treatment has the potential to help patients.
Source: UCLA Health Sciences