A new adaptive approach to designing medical studies could ensure that more patients get the most beneficial treatment, while still yielding research results that stand up to scrutiny. The approach aims to overcome the chicken-and-egg problem in medical research: Not enough people volunteer for studies of new treatments partly because researchers can’t promise the studies will help them, but without enough volunteers, researchers can’t study new treatment options.

The “adaptive” approach to clinical trial design centers on how patients are randomly assigned to one of the two or more groups in a study. In a non-adaptive trial, everyone who volunteers, from the first patient to the last, gets assigned with what amounts to a coin toss, and the groups end up being of similar size.

But in an adaptive trial, a statistical algorithm continuously monitors the results from the first volunteers and looks for any sign that one treatment is better than another. It doesn’t reveal what it’s seeing to the patients or the study doctors, but it does start randomly assigning slightly more patients to the group receiving the treatment that is starting to look better. As the researchers put it, the trial “learns” along the way.
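The reassignment step described above can be sketched as a simple response-adaptive randomization rule. This is a minimal illustration, not the actual algorithm from the JAMA article; the two-arm setup, the Laplace smoothing, and the probability floor are all assumptions made here for the sketch:

```python
import random

def allocation_probability(successes, trials, floor=0.1):
    """Probability of assigning the next patient to arm 'A'.

    successes/trials: dicts of observed outcomes per arm.
    Laplace smoothing (+1 success, +2 trials) acts as a weak 50%
    prior so the rule behaves sensibly before any data arrive.
    """
    rate_a = (successes['A'] + 1) / (trials['A'] + 2)
    rate_b = (successes['B'] + 1) / (trials['B'] + 2)
    p_a = rate_a / (rate_a + rate_b)
    # Clamp so neither arm's probability drops below the floor:
    # the apparently worse arm keeps enrolling, preserving the comparison.
    return min(max(p_a, floor), 1 - floor)

def assign_next(successes, trials, rng, floor=0.1):
    """Randomly assign the next patient, tilted toward the better-looking arm."""
    return 'A' if rng.random() < allocation_probability(successes, trials, floor) else 'B'
```

With no data the rule is a fair coin (probability 0.5 for each arm); as one arm's observed success rate pulls ahead, its assignment probability rises, but the floor keeps the tilt gentle, matching the article's "slightly more patients" framing.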

“It’s a way of assigning patients at slightly less random chance, allowing us to do what might be in the best interest of each patient as the trial goes along,” says William Meurer, MD, assistant professor of emergency medicine and neurology at the University of Michigan Medical School and lead author of the article, which appears in the Journal of the American Medical Association (JAMA).

By the end of the trial, one of the groups of patients will therefore be larger, which means the statistical analysis of the results will be trickier and the results might be a little less definitive. But if the number of patients in the trial is large, and if the difference between treatments is sizable, the results will still have scientific validity, Meurer says.
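Meurer's point about validity can be seen in a back-of-the-envelope calculation: for a fixed total enrollment, unbalancing the arms widens the standard error of the estimated treatment difference only modestly. The sketch below uses the standard two-proportion formula; the arm sizes and event rate are hypothetical numbers chosen for illustration:

```python
import math

def se_diff(p, n1, n2):
    """Standard error of the difference between two observed event
    proportions, assuming both arms have true event rate p."""
    return math.sqrt(p * (1 - p) / n1 + p * (1 - p) / n2)

# Hypothetical trial of 1,000 patients with a 50% event rate.
balanced = se_diff(0.5, 500, 500)  # equal arms
skewed = se_diff(0.5, 700, 300)    # adaptively tilted arms
# The 70/30 split widens the error bar by under 10% here.
```

Under these made-up numbers, the penalty for tilting the arms 70/30 is a standard error roughly 9% larger than the balanced design's, which is why a large trial with a sizable treatment difference can absorb the loss.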

The authors of this study do concede that when researchers just want to compare two standard treatments to make sure one isn’t grossly inferior, or when they want to pinpoint the precise impact of a preventive measure (such as aspirin) across a large population (such as heart attack survivors), adaptive designs usually won’t help.

Pharmaceutical companies and medical device manufacturers have been quicker to adopt adaptive designs for their trials, but academic centers, which conduct huge numbers of non-industry trials, have lagged behind.

“Adaptive design gives us the potential to get it right and put more people where the bang for the buck is, but still have the change be invisible to the physicians and staff carrying out the trial,” Meurer says. “If a particular option helps patients about 10% more than other options, but the adaptive design’s impact on the statistical results means that you can only say the effect is somewhere between 9% and 11%, the tradeoff is still worth it.”

Source: University of Michigan Health System