Sunday, March 24, 2013

New Patents Aim to Reduce Placebo Effect

The pharma industry has a big problem on its hands: Placebos are getting to be way too effective. Something needs to be done. But what? What can you do about placebo response? The old saying "It is what it is" would seem to hold true in this case.

One answer: come up with low-placebo-response study designs, and patent them if possible. (And yes, it is possible. But we're getting ahead of the story.) 

Placebo effect has always been a problem for drug companies, but it's especially a problem for low-efficacy drugs (psych meds, in particular). An example of the problem is provided by Eli Lilly. In a March 29, 2009 press release announcing the failure of Phase II trials involving a new atypical antipsychotic known as LY2140023 monohydrate (an mGlu2/3 receptor agonist), Lilly said:
In Study HBBI, neither LY2140023 monohydrate, nor the comparator molecule olanzapine [Zyprexa], known to be more effective than placebo, separated from placebo. In this particular study, Lilly observed a greater-than-expected placebo response, which was approximately double that historically seen in schizophrenia clinical trials. [emphasis added]
Fast-forward to August 2012: Lilly throws in the towel on mGlu2/3. According to a report in Genetic Engineering & Biotechnology News, "Independent futility analysis concluded H8Y-MC-HBBN, the second of Lilly's two pivotal Phase III studies, was unlikely to be positive in its primary efficacy endpoint if enrolled to completion."

Lilly is not alone. Rexahn Pharmaceuticals, in November 2011, issued a press release about disappointing Phase IIb trials of a new antidepressant, Serdaxin, saying: "Results from the study did not demonstrate Serdaxin’s efficacy compared to placebo measured by the Montgomery-Asberg Depression Rating Scale (MADRS). All groups showed an approximate 14 point improvement in the protocol defined primary endpoint of MADRS."

In March 2012, AstraZeneca threw in the towel on an adjunctive antidepressant, TC-5214, after the drug failed to beat placebo in Phase III trials. A news account put the cost of the failure at half a billion dollars.

In December 2011, shares of BioSante Pharmaceutical Inc. slid 77% in a single session after the company's experimental gel for promoting libido in postmenopausal women failed to perform well against placebo in late-stage trials.

The drug companies say these failures are happening not because their drugs are ineffective, but because placebos have recently become more effective in clinical trials. (For evidence on increasing placebo effectiveness, see yesterday's post, where I showed a graph of placebo efficacy in antidepressant trials over a 20-year period.)

Some idea of the desperation felt by drug companies can be glimpsed in this slideshow (alternate link here) by Anastasia Ivanova of the Department of Biostatistics, UNC at Chapel Hill, which discusses tactics for mitigating high placebo response. The Final Solution? Something called The Sequential Parallel Comparison Design.

SPCD is a cascading (multi-phase) protocol design. In the canonical two-phase version, you start with a larger-than-usual group of placebo subjects relative to non-placebo subjects. In phase one, you run the trial as usual, but at the end, placebo non-responders are randomized into a second phase of the study (which, like the first phase, uses a placebo control arm and a study arm). SPCD differs from the usual "placebo run-in" design in that it doesn't actually eliminate placebo responders from the overall study. Instead, it keeps their results, so that when the phase-two placebo group's data are added in, they effectively dilute the higher phase-one placebo results. The assumption, of course, is that placebo non-responders will be non-responsive to placebo in phase two after having been identified as non-responders in phase one. In industry argot, there will be carry-over of (non)effect from placebo phase one to placebo phase two.
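The dilution mechanism is easy to see in a toy Monte Carlo sketch of a two-phase SPCD-style trial. This is not the analysis actually specified in the patents; the response rates, arm size, and the assumption that half of the phase-one placebo non-responders land in the phase-two placebo arm are all made up purely for illustration.

```python
import random

random.seed(1)

# Assumed, illustrative response rates -- not from any real trial.
P_PLACEBO = 0.40   # phase-one placebo response rate
P_NONRESP = 0.15   # assumed placebo response among identified non-responders

N_PLACEBO1 = 2000  # phase-one placebo arm (placebo-heavy randomization)

# Phase one: observe placebo responses.
phase1 = [random.random() < P_PLACEBO for _ in range(N_PLACEBO1)]

# Phase two: placebo non-responders are re-randomized; assume roughly
# half of them end up in the phase-two placebo arm.
n_nonresp = sum(1 for r in phase1 if not r)
phase2 = [random.random() < P_NONRESP for _ in range(n_nonresp // 2)]

# Pooling both placebo phases dilutes the apparent placebo response.
rate1 = sum(phase1) / len(phase1)
pooled_rate = (sum(phase1) + sum(phase2)) / (len(phase1) + len(phase2))
print(f"phase-one placebo response: {rate1:.2f}")
print(f"pooled placebo response:    {pooled_rate:.2f}")
```

With these assumed rates, the pooled placebo response lands noticeably below the phase-one rate, which is the entire point of the design.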


This bit of chicanery (I don't know what else to call it) seems pointless until you do the math. The Ivanova slideshow explains it in some detail, but basically, if you optimize the ratio of placebo to study-arm subjects properly, you end up increasing the overall power of the study while keeping placebo response minimized. This translates to big bucks for pharma companies, who strive mightily to keep the cost of drug trials down by enrolling only as many subjects as might be needed to give the study the desired power. In other words, maximizing study power per enrollee is key. And SPCD does that.
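The economics can be sketched on the back of an envelope with the standard two-proportion sample-size formula (normal approximation). The response rates below are assumed purely for illustration; the point is only that a smaller placebo response means a larger drug-placebo difference, which shrinks the required enrollment sharply.

```python
from statistics import NormalDist

def n_per_arm(p_drug, p_placebo, alpha=0.05, power=0.8):
    """Subjects per arm for a two-proportion comparison (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    variance = p_drug * (1 - p_drug) + p_placebo * (1 - p_placebo)
    return (z_a + z_b) ** 2 * variance / (p_drug - p_placebo) ** 2

# Assumed rates: same drug response, placebo response before and after dilution.
print(round(n_per_arm(0.50, 0.40)))  # conventional trial
print(round(n_per_arm(0.50, 0.34)))  # diluted placebo response
```

Under these made-up numbers, cutting the placebo response from 40% to 34% cuts the required enrollment from roughly 385 to roughly 145 subjects per arm at the same power — well over a factor of two.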

SPCD was first introduced in the literature in a paper by Fava et al. (Psychother Psychosom. 2003 May-Jun;72(3):115-27) with the interesting title "The problem of the placebo response in clinical trials for psychiatric disorders: culprits, possible remedies, and a novel study design approach." The title is interesting in that it paints placebo response as an evil (complete with culprits). In this paper, Maurizio Fava and his colleagues point to possible causes of increasing placebo response that have been considered by others ("diagnostic misclassification, issues concerning inclusion/exclusion criteria, outcome measures' lack of sensitivity to change, measurement errors, poor quality of data entry and verification, waxing and waning of the natural course of illness, regression toward the mean phenomenon, patient and clinician expectations about the trial, study design issues, non-specific therapeutic effects, and high attrition"). They gloss over the most obvious possibility: that paid research subjects (for-hire "volunteers"), desperate in many cases to obtain free medical care, are only too willing to tell researchers whatever they want to hear about whatever useless palliative is given them. But then Fava and his coauthors make the baffling statement: "Thus far, there has been no attempt to develop new study designs aimed at reducing the placebo effect." They go on to present SPCD as a more or less revolutionary advance in the quest to squelch the placebo effect.

Until this point, I don't think any scientific paper had ever discussed a need to attack the placebo effect as something bothersome, something that interferes with scientific progress, something to be guarded against as vigilantly as Swine Flu. The whole idea that the placebo effect is getting in the way of producing meaningful results is repugnant, I think, to anyone with scientific training.

What's even more repugnant, however, is that Fava's group didn't stop with a mere paper in Psychotherapy and Psychosomatics. They went on to apply for, and obtain, U.S. patents on SPCD (on behalf of The General Hospital Corporation of Boston). The relevant U.S. patent numbers are 7,647,235; 7,840,419; 7,983,936; 8,145,504; 8,145,505, and 8,219,41, the most recent of which was granted July 2012. You can look them up on Google Patents.

The patents begin with the statement: "A method and system for performing a clinical trial having a reduced placebo effect is disclosed." Incredibly, the whole point of the invention is to mitigate (if not actually defeat) the placebo effect. I don't know if anybody else sees this as disturbing. To me it's repulsive.

If you're interested in licensing the patents, RCT Logic will be happy to talk to you about it. Download their white paper and slides. Or just visit the website.

Have antidepressants and other drugs now become so miserably ineffective, so hopelessly useless in clinical trials, that we need to redesign our scientific protocols in such a way as to defeat placebo effect? Are we now to view placebo effect as something that needs to be made to go away by protocol-fudging? If so, it puts us in a new scientific era indeed.

But that's where we are, apparently. Welcome to the new world of wonder drugs. And pass the Tic-Tacs.

15 comments:

  1. The statisticians involved, at least, ought to know better. (1) They are assuming that placebo response is genuine and not just regression to the mean (the latter is usually grossly underestimated). (2) They are assuming that it is reproducible and not transient. (3) They are assuming that the FDA is asleep. If the Agency is not dozing, it will make them market the drug as suitable only for those with a proven inability to respond to placebo. That should sort them out.

  2. Anonymous, 8:53 PM

    Could you expand please, on why it is repugnant to attempt to lower the placebo response? I've heard this before, and I'm genuinely interested as to why this is a problem. I see it as a signal-detection issue where solving it may have consequences for generalizability etc. I don't really see it as a moral issue, but there's every chance I'm missing something?

    As an aside, it strikes me that best design to test a new drug would be one where *no-one* knows they are receiving it. Completely unethical of course, but we could be guaranteed that any observed effect (positive and negative) was actually due to biologically induced change - rather than expectation, demand characteristics etc. Thus, eliminating the placebo effect would be extremely useful here, if we care about knowing the true harms and benefits of the drug in question.

    P.S. I'm not sure psych meds qualify as low-efficacy treatments. http://www.ncbi.nlm.nih.gov/pubmed/22297588

    Paul

  3. Paul, if you take the point of view that the practical question is whether a drug improves the situation beyond what would normally happen, then it seems appropriate to have as a control what would normally happen. In any case, what is labelled placebo response is in most cases just regression to the mean. I regard all this placebo-response stuff as just so much junk science. See http://eprints.gla.ac.uk/8107/1/id8107.pdf and also
    Kienle, G. S. and H. Kiene (1997). "The powerful placebo effect: fact or fiction?" Journal of Clinical Epidemiology 50(12): 1311-1318.

  4. Anonymous, 7:58 AM

    Stephen, thanks for your reply and the links. I agree with your point about pragmatism but I also see the value of a carefully controlled assessment of efficacy & harms - not least to avoid Type I error for the former and Type II for the latter. I think there must be room for both approaches...

    P.S. Are you the Stephen Senn of cross-over analysis fame?

    Paul

  5. I also am failing to fully grasp the ethical problems of this study design, and of attempting to reduce the placebo response in RCTs in general. Presumably, reductions in placebo response would happen across all arms of the trial equally, which would provide a clearer picture of the specific chemical activity of the drug.

    The major problem with the design seems to be -- as Stephen points out in the first comment -- that it probably won't reduce the total placebo effect size much. If placebo responders were likely to remain consistent in their response, then we'd already have a simple solution in the placebo run-in period. The problem being, of course, that there's really no evidence that run-in periods do what they're supposed to do.

    I actually wrote a brief blog post about the SPCD a couple of years ago (here). Then, I asked essentially the same question I'm asking now: what evidence do we have that SPCD reduces placebo response compared to more traditional methods?

    (and a second question for the author: if SPCD is an unethical design, then wouldn't a design featuring a placebo run-in -- which most trials use -- also be unethical?)

  6. Anonymous, 12:52 PM

    The problem is that there are some people in both treatment and control groups who are placebo-responders -- they will respond no matter what you give them. Responders therefore have to be kept in the control group as the proper baseline for comparison. If they're left out in any way, you're comparing apples and oranges -- a treatment group that includes some unknown placebo responders (whose response to the drug is therefore inflated) is being compared to a control group that excludes placebo responders. Right?

  7. I think the issues here are a little more complicated than presented by the author.

    We have a simple observation, that drugs that robustly separated from placebo 20 years ago no longer reliably do so. Did they become less active? Probably not. Are placebos working better than they used to? Probably, at least within the clinical trial setting. Assuming that is the case, what should we do?

    1) We could remove all these drugs from the market and simply give patients placebos. But it's not really clear whether the placebo effect would continue to operate once word got out that doctors were routinely handing out placebos. In fact, one study found that the magnitude of the placebo effect is proportional to the likelihood of receiving active drug.

    2) We could leave the old drugs on the market and not approve the new ones. However, the new drugs appear to be equally efficacious to the old ones, and in many cases (e.g., lurasidone) have better side effect profiles. This would seem not to serve the interests of patients either.

    3) We could attempt to make clinical trials look more like real-life treatment, in which the choices are not between drug and placebo but between drug and no treatment. But this would lead to drugs with no intrinsic efficacy being approved as a matter of course.

    4) We can try to understand the placebo effect, and engineer it out of trials, so that we know that approved drugs have greater efficacy than placebo under at least some test conditions.

    I would say that choice #4 is probably the best of those that I can come up with. For all the insinuations in the article about how pharma is trying to pull the wool over everyone’s eyes, it seems that given the available choices, the industry is pursuing the only one that really makes any sense.

    Obviously we would all like to have more efficacious drugs whose effects are so robust that the issue of establishing separation from placebo never arises. If the author has any ideas on how such compounds might be produced, I’m sure there are venture capitalists who would be strongly interested to hear them.

  8. Anonymous, 7:40 AM

    Jim, very interesting points.

    Something that doesn't seem to be getting enough consideration from the industry is that improvements in trial methodology (governance, oversight, pre-registration, CONSORT, etc.) have reduced the risk of bias, both in trial conduct and in reporting, and that this might account for some of the increased placebo effect. I wonder what your view on this is?

    If I am right, then it follows that older trials overestimated efficacy.

    Paul

  9. Anonymous, 7:39 AM

    To Jim: you are assuming that the only option is to treat patients with drugs (be it placebo or "real," effective drugs). However, at least in the case of depression, psychotherapy can be highly effective: take cognitive behavioural therapy as a good example. There are even options for self-therapy (see the excellent book by CBT therapist David Burns, "Feeling Good: The New Mood Therapy").

    So no, the industry isn't "pursuing the only one that really makes any sense."

  10. Anonymous, 9:39 AM

    I remember reading a very enlightening piece about why the old drugs don't work these days: they never did. The reason is really simple: the industry would only publish studies that showed a positive effect and hide those that showed a negative one. Bad study design and reporting bias do the rest.

    Run enough trials and you always get some studies with a positive outcome, even if the drug itself does not work.

    So any study design that tries to lower placebo response is a great idea for the industry, because it's a cost-effective way of creating more studies with positive outcomes, independently of the efficacy of the drug.

    Now tell me, what's not to love about that?

    Bernard

  11. Statistically, I can see arguments both ways.

    Pro SPCD:
    If we assume that a psychiatric diagnosis describes a symptom rather than an underlying cause, that placebo is effective at counteracting one cause and the medication effective at counteracting another, and that both are equally strong and independent of each other, then we would get the same number of positive responses in each arm, yet it would still make sense to have that medication. The SPCD would in this case show that the medication is effective for causes where the placebo isn't. An ineffective medication that only works through the placebo effect should still fail SPCD studies. (An additional argument pro SPCD is that, owing to the low efficacy of psychiatric drugs, practitioners test patients on one drug and, if it doesn't work, switch them to another one. SPCD mimics this practice by switching patients on whom the placebo didn't work to the medication being tested. Thus it could be said to be more relevant to actual medical practice.)

    Contra SPCD:
    The above argument assumes that the placebo effect works the same whether it's the medication or the actual placebo that triggers it. However, that may not be true. In that case, the SPCD may simply funnel into the second phase a higher percentage of subjects who are able to detect placebos (e.g., through missing side effects). For these people, the placebo effect of the real drug may be stronger, perhaps because they notice side effects that were absent in the previous placebo phase and (correctly) conclude they're now being given "the real thing," making the placebo effect work for them.

  12. Anonymous, 11:09 AM

    Didn't you jump the gun with this article, by about a week?

    "The drug companies say these failures are happening not because their drugs are ineffective, but because placebos have recently become more effective in clinical trials."

    Hahahaa!!! But wait, the date says March 24!

  13. Thank you for sharing. The ViS platform enables clinical research optimization.

  14. Why does the placebo effect have to be reduced, if it is actually effective?

  15. Outstanding efforts for making this blog!! Your writers and your work are really appreciative.

