Does a widespread medical emergency justify speedier, and sometimes less rigorous, ways to test treatments and evaluate results? Doctors and patients urgently need to get their hands on drugs for the COVID-19 pandemic. But bioethicists Jonathan Kimmelman of McGill University and Alex John London of Carnegie Mellon University argue in an April 23 Science article that hurried trials and tests can do more harm than good. They highlight hastily published case reports that, they contend, can lead doctors to believe some drugs offer more of a benefit than has been proved.
The researchers also draw parallels with the 2013–2016 Ebola outbreak, in which, they say, relaxed scientific standards for drug trials yielded little conclusive evidence about which treatments actually worked. Kimmelman says physicians should be able to try experimental drugs on desperately ill patients on a so-called compassionate use basis. But he argues that these efforts should not displace careful science. He discussed these concerns with Scientific American.
[An edited transcript of the interview follows.]
What are some examples of medical reports about COVID-19 treatments that can cause problems?
One example is a paper that was published in the New England Journal of Medicine about the [experimental antiviral drug] remdesivir. This was not a clinical trial. This was a series of case reports using remdesivir under the compassionate use mechanism. [Scientific American asked the New England Journal of Medicine for a response, and a spokesperson declined to comment.]
But the journal said it was a report about compassionate use. Isn’t there value in publishing case reports as long as it is clear they are not randomized clinical studies?
If you can assume that your audience has sufficient sophistication to interpret the article as such, then I don’t see a problem with it. But those conditions don’t really hold. If you’re a working doctor, you’re busy. You don’t have time to sit down and carefully read the reports. You’re treating patients, and you see, “Oh, there’s a paper about remdesivir being effective.”
You argue such reports can make future research harder. How so?
If you want to run a rigorous clinical trial to determine whether or not the risks and expense of using [remdesivir] are worthwhile, you need to have a control group. It’s going to be hard to invite patients to go into a trial where they have a 50 percent chance of getting a placebo if most physicians, and most patients, believe [the drug is] already proved to be effective.
Shouldn’t physicians have the ability, in this pandemic, to treat critically ill patients with experimental medications they think could help?
I don’t have a problem with compassionate use, provided that it is not interfering with the efficient conduct of clinical research. My concern is when compassionate use begins to siphon patients who might otherwise be eligible for clinical trials away from those clinical trials. Or when it begins sopping up enormous resources that, in my opinion, we should be directing toward establishing that these treatments are actually effective, as opposed to throwing darts at a dartboard.
But what if there’s a drug that looks promising in early reports and has been shown to be safe?
Medicine is replete with examples of treatments that looked really, really promising in case reports or in small clinical trials or in larger but poorly designed clinical trials. But then, when they were put to rigorous evaluation in a properly designed and reported randomized controlled trial, they turned out to be ineffective or—even worse—harmful, compared with the standard of care. The field of Alzheimer’s disease is littered with drugs that looked really promising in phase II clinical trials but that turned out to be ineffective when they were put into phase III clinical trials.
Is it possible to conduct clinical trials that are both fast and scientifically solid?
In my opinion, it is. But it requires doing science differently than we normally do it. There are clinical trial designs called master protocols that allow you to evaluate many interventions in a single clinical trial. Study arms or interventions can be added or dropped, depending on whether or not there’s a new treatment that looks really promising. Because the process is seamless, there is less dead time between the end of one trial and the beginning of another. The World Health Organization’s Solidarity trial [for four COVID-19 treatments] is an example. But those kinds of studies require a lot of coordination.
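[To make the structure Kimmelman describes concrete, here is a minimal simulation sketch of a master protocol: several experimental arms share one control group, an underperforming arm is dropped at an interim look, and a new candidate joins midstream. The drug names, recovery rates and futility rule below are assumptions chosen for illustration, not details of the Solidarity trial or any real study.]

```python
import random

random.seed(0)

# Hypothetical true recovery rates; "drug_A" etc. are made-up names,
# not real COVID-19 candidates.
TRUE_RATE = {"control": 0.50, "drug_A": 0.50, "drug_B": 0.65}

tally = {arm: {"recovered": 0, "treated": 0} for arm in TRUE_RATE}
active = ["control", "drug_A", "drug_B"]

def treat(arm, n):
    """Simulate n patients on one arm and record how many recover."""
    recovered = sum(random.random() < TRUE_RATE[arm] for _ in range(n))
    tally[arm]["recovered"] += recovered
    tally[arm]["treated"] += n

for look in range(1, 5):              # four interim analyses
    for arm in list(active):
        treat(arm, 100)               # enroll 100 new patients per active arm
    ctrl = tally["control"]["recovered"] / tally["control"]["treated"]
    for arm in [a for a in active if a != "control"]:
        rate = tally[arm]["recovered"] / tally[arm]["treated"]
        # Crude futility rule (illustration only): drop any arm trailing
        # the shared control by more than 5 percentage points.
        if rate < ctrl - 0.05:
            active.remove(arm)
            print(f"look {look}: dropped {arm} ({rate:.2f} vs control {ctrl:.2f})")
    if look == 1:
        # A new candidate can join the ongoing protocol -- no new trial,
        # no dead time between studies.
        TRUE_RATE["drug_C"] = 0.55
        tally["drug_C"] = {"recovered": 0, "treated": 0}
        active.append("drug_C")
        print(f"look {look}: added drug_C to the ongoing protocol")

for arm, t in tally.items():
    if t["treated"]:
        print(f'{arm}: {t["recovered"]}/{t["treated"]} recovered')
```

[Real platform trials replace the crude five-point rule above with prespecified statistical stopping boundaries, but the structural advantage is the same: one ongoing protocol, many arms, one shared control group and no gap between studies.]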
You say that the results from many of the trials conducted during the 2013–2016 Ebola outbreak were inconclusive. What went wrong?
Many people argued that (a) it would be unethical to put people into a placebo group [because the death rate was so high] and (b) we shouldn’t demand exacting scientific standards when we’re evaluating these treatments. About eight treatment trials were conducted, only one of which was actually a randomized controlled trial with a proper comparator group. At the end of the day, we still don’t have a clear sense of whether most of these treatments, including convalescent plasma and the antiviral drug favipiravir, do more harm than good and whether they’re worth deploying.
Medicine has made plenty of breakthroughs without clinical trials, has it not?
Penicillin is a good example of that. And there are many other examples in cancer. But they are the exception. It turns out that most drugs have small effects, and you need to observe them in a lot of patients in order to detect a clear signal that they are beneficial. I think some people believe we’re going to hit a home run or a grand slam [with a new drug]. Grand slams, in baseball as in medicine, are extremely rare. Your strategy shouldn’t be to get grand slams all the time. It should be to get people on base. That’s much more realistic, and you can win a lot more games that way.
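[To see why small effects require large trials, consider the standard sample-size calculation for comparing two proportions; the mortality figures below are hypothetical, chosen only to illustrate the arithmetic.]

\[
n \;\approx\; \frac{(z_{1-\alpha/2} + z_{1-\beta})^2 \,\bigl[p_1(1-p_1) + p_2(1-p_2)\bigr]}{(p_1 - p_2)^2}
\]

[With a two-sided \(\alpha = 0.05\) and 80 percent power (\(z_{1-\alpha/2} = 1.96\), \(z_{1-\beta} = 0.84\)), detecting a drop in mortality from \(p_1 = 0.20\) to \(p_2 = 0.15\) requires roughly \(n \approx (2.8)^2 (0.16 + 0.1275) / (0.05)^2 \approx 902\) patients per arm, or about 1,800 patients in a two-arm trial: far more than any case series supplies.]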