Monday, July 16, 2018

Placebo Effect

Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8(4), 445-454.

To draw causal conclusions about the efficacy of a psychological intervention, researchers must compare the treatment condition with a baseline or control group that accounts for improvements caused by factors other than the treatment. In pharmacological research, the control group receives a sugar pill (a placebo) that looks identical to the experimental pill, meaning that participants cannot tell whether they are in the experimental condition or the control condition. Because they are blind to their condition assignment, they should not hold different expectations for the effectiveness of the pill, and any difference between the groups on the outcome measure may be attributed to the effect of the treatment...

Participants in psychological interventions typically know which treatment they received. For example, participants undergoing an experimental cognitive therapy for anxiety are aware that they are receiving treatment and are likely to expect to improve as a result. Measuring the effectiveness of this therapy by comparing it with a no-treatment control condition would be inadequate because the two groups would have different expectations for improvement, and few scientists would accept such a comparison as compelling evidence that the ingredients of the therapy were responsible for observed improvements. A better comparison would be with an active control group, one that receives a similar therapy that does not specifically target their anxiety.

Many researchers, reviewers, and editors of psychology interventions apparently believe that including an active control group automatically controls for placebo effects... This failure to control for the confounding effect of differential expectations is not a minor omission—it is a fundamental design flaw that potentially undermines any causal inference. Absent any measurement of expectations, conclusions about the effectiveness of an intervention, whether the intervention is designed to improve education, mental health, well-being, or perceptual and cognitive abilities, are suspect. We should distrust those conclusions just as we discount findings from a drug study in which participants knew they were getting the treatment.

[The authors make their point by discussing the claim that action video game training enhances perceptual and cognitive abilities.] [U]nlike many other psychology interventions, game-training studies typically include active control conditions that are closely matched to the training condition. Nevertheless, they still do not adequately account for expectation effects. [P]articipants believe that the action-game treatments will produce bigger improvements in visual processing than will the control games....

Take the claim that daily writing improves physical and mental health (see Pennebaker, 1996, for review). In such studies, participants in the experimental group typically write (repeatedly) about personal thoughts and feelings, experienced trauma, or highly emotional issues. In contrast, those in the control condition typically write about trivial topics (e.g., “Describe the outfit you are wearing today in detail” or “Describe the things you do before class on a typical Monday”; Park & Blumberg, 2002). Matching the activity in the experimental and the active control group is laudable, but the two groups presumably differ in their expectations for therapeutic benefits, meaning that any improvements might result from a differential placebo effect...

A Way Forward... 

There are methods to measure and account for the influence of differential expectations and demand characteristics. These include [1] explicitly assessing expectations, [2] carefully choosing outcome measures that are not influenced by differential expectations, and [3] using alternative designs that manipulate and measure expectation effects directly.

Even better than measuring expectations during a study or after the fact would be to choose an active control task or outcome measure on the basis of an independent assessment of expectations. For example, a game-training study could choose an outcome measure that shows no difference in expectations between the action game and control game but that the hypothesis predicts should benefit from action-game training.
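The selection logic described above can be sketched in a few lines of code. This is only an illustration: the measure names, expectation-gap numbers, and the 0.10 threshold are hypothetical, not taken from Boot et al. (2013). The idea is to keep only outcome measures for which an independent norming sample reported no meaningful expectation difference between groups, but for which the hypothesis still predicts a training benefit.

```python
# Hypothetical data: expectation_gap is the mean difference in expected
# improvement (action-game group minus control group) from an independent
# survey; predicted_benefit is whether the theory predicts training gains.
candidate_measures = {
    "useful_field_of_view":     {"expectation_gap": 0.05, "predicted_benefit": True},
    "multiple_object_tracking": {"expectation_gap": 0.40, "predicted_benefit": True},
    "vocabulary":               {"expectation_gap": 0.02, "predicted_benefit": False},
}

def valid_outcome_measures(measures, max_gap=0.10):
    """Keep measures with a negligible expectation gap and a predicted benefit."""
    return [name for name, m in measures.items()
            if abs(m["expectation_gap"]) <= max_gap and m["predicted_benefit"]]

print(valid_outcome_measures(candidate_measures))  # → ['useful_field_of_view']
```

A measure that survives this filter supports a cleaner causal inference: if the treatment group improves on it more than the control group, differential expectations are an unlikely explanation, because participants did not expect the groups to differ on that measure in the first place.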

"Just a Placebo Effect"?

We have discussed placebo effects largely in terms of expectations influencing the motivation to perform well on an outcome measure (e.g., someone devoting more effort to a memory measure after completing memory training because he or she now expects to perform better). However, placebo effects can operate in other ways and take many forms (for review, see Benedetti, Mayberg, Wager, Stohler, & Zubieta, 2005; Price, Finniss, & Benedetti, 2008).

Placebos can trigger the release of endogenous opioids and can also reduce pain through nonopioid mechanisms (Montgomery & Kirsch, 1996). Placebo treatments are associated with functional brain changes, including decreased activity in pain-related brain areas (Wager et al., 2004). Placebos also can operate via classical conditioning: If the act of taking medication is associated with a physiological response, an inert placebo can trigger a similar conditioned response (Stockhorst, Steingrüber, & Scherbaum, 2000). Finally, expectancies can affect memory for previous experiences (Price et al., 1999), biasing self-report and subjective outcome measures in favor of an intervention.

Placebo effects are real and worthy of explanation in their own right, and we do not mean to dismiss their important (and clinically relevant) effects in medical and psychological interventions. However, whenever researchers want to attribute causal potency to the intervention itself, it is incumbent on them to verify that the improvements are not driven by expectations...

How to Assess an Intervention
  1. Is the intervention compared with a control group? If NO: No causal claim merited: Improvements could result from retest, regression to the mean effects, effect of intervening events (history), motivation and expectations (placebo), social contact, etc. If YES, move to #2.
  2. Is the control group active? If NO: No causal claim merited: Control accounts for retest, regression to the mean, and history effects. Improvements could result from motivation and expectations (placebo), social contact, etc. If YES, move to #3.
  3. Are expectations between groups equated for each outcome measure? If NO: No causal claim merited: Control accounts for retest, regression to the mean, history effects, and social contact. Improvements could result from differential motivation and expectations (placebo effects). If YES: Causal claim merited: After equating expectations for each outcome measure, differential motivation, expectations, and placebo effects are unlikely to explain differential improvements.
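The three-step checklist above is a simple decision procedure, and it can be expressed as a short function. This is my own sketch of the flowchart's logic, not code from Boot et al. (2013); the function name and return strings are hypothetical paraphrases of the verdicts in steps 1–3.

```python
def assess_intervention(has_control: bool,
                        control_is_active: bool,
                        expectations_equated: bool) -> str:
    """Walk the three-step checklist and return the verdict for a study design."""
    if not has_control:
        # Step 1 fails: nothing rules out retest, regression to the mean,
        # history, placebo, or social-contact explanations.
        return ("No causal claim merited: improvements could reflect retest, "
                "regression to the mean, history, placebo, or social contact.")
    if not control_is_active:
        # Step 2 fails: retest, regression, and history are handled,
        # but placebo and social-contact effects remain.
        return ("No causal claim merited: placebo and social-contact "
                "effects could still explain the improvement.")
    if not expectations_equated:
        # Step 3 fails: an active control alone does not equate expectations.
        return ("No causal claim merited: differential expectations "
                "(placebo effects) could still explain the improvement.")
    return ("Causal claim merited: with expectations equated for each outcome "
            "measure, placebo effects are unlikely to explain the result.")

print(assess_intervention(has_control=True,
                          control_is_active=True,
                          expectations_equated=False))
```

Note that the checks are ordered: each later step only matters once the earlier ones pass, which mirrors the "If YES, move to #..." structure of the flowchart.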
* * * * * 


"Participants in placebo groups have displayed changes in heart rate, blood pressure, anxiety levels, pain perception, fatigue, and even brain activity."
