Clinical Research News
Experts Call for Stricter Regulations in Clinical Trials of New Drugs
Mar 14, 2011
When most people learn about the results of clinical trials for new medicines, it's either because something very good or very bad happened. Last year illustrated both: in April, the U.S. Food and Drug Administration approved Dendreon's prostate cancer drug Provenge because clinical trials went well, and five months later Eli Lilly infamously had to pull its Alzheimer's candidate drug semagacestat because test subjects taking it were actually getting worse -- putting both in the news.
"There seems to be two narratives people can handle: Experiments are dangerous and you shouldn't take part, or experimentation is the engine for the next great wonder drug," said Alex John London, an associate professor of philosophy at Carnegie Mellon University who is the co-author of an article in the current issue of the journal PLoS Medicine that looks at this and related issues with clinical trials. "What we need is a more even set of expectations."
That's important, Mr. London said, because if people believe there are only two possible outcomes at either extreme, they might not be willing to participate in clinical studies.
"If people don't participate, studies don't happen," he said.
Mr. London and his colleague, Jonathan Kimmelman, an associate professor in the biomedical ethics unit at McGill University in Montreal, co-authored the article "Predicting Harms and Benefits in Translational Trials: Ethics, Evidence and Uncertainty."
But while arguing that our expectations of clinical trials need to be more realistic and informed, Mr. London and Mr. Kimmelman also believe that the trials themselves need to be proposed, and run, more rigorously.
The article -- which Mr. London sees as "taking part in a wider conversation" with researchers, funders, review boards and other stakeholders -- goes right at what they see as some of the problems in both pre-clinical animal-based trials and the translational, first-in-human clinical trials where so many proposed drugs fall apart.
They point out the problems they see in clinical trials and propose how those problems could be corrected -- with potentially far-reaching implications for drug research.
"I thought they raised incredibly important points," said Nancy Davidson, director of the University of Pittsburgh Cancer Institute and UPMC Cancer Centers. "I think Kimmelman and London are right: This black and white, either really good or really bad theme, isn't the answer. "
The co-authors were motivated to write the article, Mr. Kimmelman said, because "only rarely do the effects seen in animal studies translate into human studies."
That is, even though researchers regularly find success with a proposed drug during research on animals, they only infrequently achieve similar success with human test subjects.
They note in their paper that one study showed, for example, that only 5 percent of proposed cancer drugs that enter trials are eventually licensed.
"Drug discovery is hard," Mr. London said. "I don't think there's any way of getting around that."
And that gets to a central theme of their work, Mr. Kimmelman said: "What we're really trying to do here is check the expectations associated with major clinical findings."
That has important implications not only for possible human test subjects -- who need to have a better understanding of potential outcomes -- but for organizations that provide research funding "and who would want to move their resources into the most promising areas," Mr. London said.
Still, they believe clinical trial outcomes could be improved by addressing two main problems they have identified.
The first is that most animal studies don't use the same generally accepted methods used in human trials -- such as randomization and blinded outcome assessment -- to prevent bias by the researcher.
They note that one recent analysis of animal studies found that only 12 percent used random allocation -- where animals are randomly assigned to receive the proposed drug or a placebo -- and only 14 percent used blinded outcome assessment -- where the researcher evaluating the results doesn't know which animals got the drug and which got the placebo.
These are important because "researchers have an inherent bias in wanting to see a positive outcome" from the studies, Mr. Kimmelman said.
Moreover, Mr. London said that the routine use of randomization and blind testing in animal studies should increase the efficiency of the drug development process, "if for no other reason than that it enables us to better distinguish effects caused by new interventions from outcomes that are simply artifacts of the way we have set up the experiment."
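To make those two safeguards concrete, here is a minimal sketch in Python of how a study might implement random allocation and blinded outcome assessment. The function and variable names are hypothetical, and this is an illustration of the general technique, not any protocol discussed by the authors:

```python
import random

def randomize_and_blind(animal_ids, seed=None):
    """Randomly allocate animals to treatment or placebo, and issue
    coded labels so the outcome assessor never sees the group."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)                                   # random allocation
    half = len(ids) // 2
    assignment = {a: "treatment" for a in ids[:half]}
    assignment.update({a: "placebo" for a in ids[half:]})

    # Blinding: outcomes are scored against opaque codes; the
    # code-to-group key is held back until scoring is finished.
    codes = {a: f"subject-{i:03d}" for i, a in enumerate(sorted(ids))}
    key = {codes[a]: group for a, group in assignment.items()}
    return codes, key

# Example: split 20 animals 10/10 and score outcomes by code only.
codes, key = randomize_and_blind(range(20), seed=42)
```

The point of the two separate steps is exactly what the authors describe: the shuffle removes the researcher's hand from group assignment, and the coded labels keep whoever scores the outcomes from knowing which group an animal was in.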
Sean Savitz, a stroke researcher and associate professor of neurology at the University of Texas Medical School at Houston, said: "What they're getting at is what we've been trying to get at in the stroke field -- that studies with animals are not being done as well as they could be."
The second problem, they believe, is that researchers don't look widely enough at other related research when predicting how a proposed drug will fare once human trials begin.
They believe that most researchers, when predicting the outcome of their study, only consider other studies involving the particular agent, or drug, they are testing. The researchers don't look at agents that might be different but that work on the same pathway, they argue. Mr. London and Mr. Kimmelman refer to this process as "evidential conservatism."
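As a rough numerical sketch of why that matters -- the figures below and the simple smoothed-rate estimate are hypothetical illustrations, not anything taken from the paper -- an agent that has done well in its own few trials can look far less promising once the track record of the whole pathway is counted:

```python
# Hypothetical numbers: an "evidentially conservative" researcher looks
# only at trials of the agent itself; the authors urge also weighing
# trials of other agents acting on the same pathway.
agent_trials   = {"positive": 2, "total": 3}    # this agent only
pathway_trials = {"positive": 3, "total": 15}   # all same-pathway agents

def smoothed_rate(trials, prior_pos=1, prior_total=2):
    # Laplace-style smoothing so tiny samples don't yield 0% or 100%
    return (trials["positive"] + prior_pos) / (trials["total"] + prior_total)

print(f"agent-only estimate of success:   {smoothed_rate(agent_trials):.2f}")    # 0.60
print(f"pathway-wide estimate of success: {smoothed_rate(pathway_trials):.2f}")  # 0.24
```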
Howard Mann, a program associate in the Division of Medical Ethics at the University of Utah School of Medicine, said this point made by Mr. London and Mr. Kimmelman is troubling.
"What should be alarming is the persistent conduct of pre-clinical research that will not yield clinically relevant information because researchers are not aware of, or have not made sufficient attempts to identify, the negative results of prior completed relevant research," Mr. Mann said in an e-mail statement about the article.
As an example of this, the authors presented two tables with their article: One shows the negative outcomes of eight randomized trials of anti-amyloid drug candidates for Alzheimer's disease, similar to Eli Lilly's semagacestat; and the other shows the negative outcomes of seven randomized trials for neuroprotective agents and/or transplanted tissue strategies to combat Parkinson's disease.
"If consistently neuroprotective strategies have failed, we're suggesting that people should ratchet down their expectations," Mr. Kimmelman said.
Researchers might be doing that on their own, Mr. London said, but they should also include their findings, and lower expectations, in their proposals to the committees that have to review the results of an animal trial before allowing it to move on to human trials.
Including such information in proposals could result in some research not being funded, Mr. Kimmelman acknowledged. But it could also push researchers to "come back and explain how their work is going to overcome previous results."
Ultimately, Mr. London said: "We want to make sure we design our trials as well as possible so we can learn from them -- even if it is a failure in outcome.
"It's not that we learn from every failure, but it's that when they are well designed, we can learn much more from them."
Source Publication: Pittsburgh Post-Gazette