The Problems With Evidence-Based Psychotherapy

There has been a tremendous movement toward evidence-based treatment in clinical psychology over the past decade. On its face, this is a good thing — the idea that we should use scientific findings to make sure the types of treatment we’re using in psychotherapy actually work. My own clinical training included a wide array of these empirically based treatments, and I happily use many of their key techniques. I also continue to pay close attention to new clinical research, and frequently review the literature in the course of formulating treatments for my patients.

However, I see some really serious problems with evidence-based psychotherapy, problems that I believe are actually harmful to patients and to the credibility of clinical psychology:

The studies use unrealistic exclusion criteria

In an effort to eliminate confounding variables that could interfere with study results, research psychologists try to exclude patients suffering from any ailments outside of the single diagnosis that the treatment protocol is designed for. This is in line with the medical model of psychology, which attempts to isolate discrete psychological disease processes and treat them directly, much the same way that a physician might take a throat culture and then prescribe antibiotics to treat strep throat.

The problem is that psychological issues don’t really work that way. The diagnoses used in clinical psychology are generally not isolated disease processes the way viral or bacterial infections are. Most people don’t present with a single discrete disorder, but rather with a spectrum of symptoms, personality characteristics, and social factors, all of which can be considered part of the syndrome that has led them to seek treatment. This requires holistic treatment that identifies and addresses the underlying causes.

In many ways, mental illness is similar to metabolic syndrome, another problem for which the medical model is proving inadequate. All day long I see patients with separate diagnoses of diabetes, hypertension, hyperlipidemia, cardiovascular disease, obesity, and sleep apnea. Each of these identified problems is then treated by suppressing the symptoms: for diabetes there are pills to lower blood sugar, for apnea there is the CPAP device, for high blood pressure there are pills to lower the blood pressure. Even the preventive advice that is rendered is symptom-focused: switch to whole grains so the sugar will enter your system more slowly, eat less salt to reduce blood volume. But all of the symptoms are actually related. They all result from the same underlying problem of metabolic dysregulation, and until that total syndrome is addressed, all of the symptoms will continue to get worse over time no matter how well they are managed.

Mental illness is like that. There are genetic, social, and cultural predispositions that contribute, but in order for the syndrome to be expressed, a disruption must take place in the management of psychological resources: emotions, thoughts, internal imagery, relationships, etc. A cascade of compensatory actions is then set in motion, resulting in the symptoms we see on the surface: depression, anxiety, nightmares, panic attacks, hallucinations, substance abuse, etc.

I’ve treated substance abuse, and have rarely seen a case where depression or anxiety was not also present. I’ve treated PTSD, and it’s the same story. Would you believe that most sufferers of severe, adult-onset chronic pain are survivors of some form of childhood abuse? Even your basic top-level disorders, anxiety and depression, actually share many symptoms and frequently develop simultaneously. And many patients’ suffering simply doesn’t fit any currently available psychiatric diagnosis.

So what happens when you pick a single disorder, cherry-pick patients who don’t meet criteria for any other problem, and study their response to an intervention? They get better! In fact, they mostly get better even when you just put them on a waiting list. These are the easiest, most unusually uncomplicated cases, and they are not a good representation of the patients who actually come in for treatment. The studies are therefore not generalizable; they tell us very little about how well a treatment will work in the real world.

The studies aren’t repeatable

Because there are so many potential confounding factors in research on live humans, you should never read too much into any one study. Once a study has been repeated a few times, preferably by different researchers under different conditions, we can start to get a little more confident that we’re seeing a real treatment effect and not just an artifact of some unpredictable condition of the study. But that’s not happening very much in the field of clinical psychology. Most psychotherapy outcome studies are simply never replicated.

Part of the problem is publication bias: for a variety of reasons, papers describing a failure to replicate previous studies are published far less often than papers reporting positive findings. Many of the findings could not be replicated regardless, because they are false positives: the study designs do not adequately distinguish between treatment effects, placebo effects, and random effects. There is tremendous pressure on academic researchers to produce positive results, and this leads to both conscious and unconscious bias in the way research is carried out.

Finally, psychological treatments, like medical treatments, are subject to decline effects: over time, they get less effective. Later studies find smaller effects than earlier ones, even when the research design is sound. This could be due to changes in researcher enthusiasm, changes in the patient populations that the treatments are being tested on, or changes in the cultural context in which treatments are administered. Regardless, most published research findings are false. It is therefore highly problematic to make those findings the basis of clinical treatment.
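
Both publication bias and the resulting decline are easy to demonstrate in a toy simulation. The sketch below is an illustration of the mechanism only, with entirely made-up numbers: a small true effect studied with small samples, analyzed with a simple two-sample t-test, where only “significant” results get written up:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_d, n_per_group, n_studies = 0.2, 20, 2000  # small true effect, small samples

published = []
for _ in range(n_studies):
    treatment = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = ttest_ind(treatment, control)
    if p < 0.05:  # only "significant" findings get written up
        published.append(treatment.mean() - control.mean())

print(f"published: {len(published)} of {n_studies} studies")
print(f"true effect: {true_d}, mean published effect: {np.mean(published):.2f}")
```

Under these assumptions the published literature reports an effect several times larger than the true one, and better-powered replications will predictably “decline” toward the truth.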


The studies often ignore therapist effectiveness & the treatments fail to utilize therapist skill

A large number of controlled and naturalistic studies have found that the therapist providing treatment is more important to the outcome than the type of treatment being provided. This makes a certain kind of natural sense to most people, yet psychologists remain resistant to the idea, and the reasons behind these therapist differences have not been adequately studied. We don’t really know what makes the difference between a really great therapist, a mediocre therapist, and an ineffective therapist. Treatment adherence does not explain the difference.

So what do psychological researchers do? Rather than try to isolate the factors that make a therapist great, they try to eliminate them in order to study the effects of specific treatment protocols. The therapists studied must comply with manualized treatments in ways that often contradict their own highly trained clinical instincts, and then statistical methods are used to cover over any remaining differences.
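
To make that concrete, here is a minimal sketch using purely hypothetical numbers, in which patients are nested within therapists whose individual effectiveness varies widely. The pooled average looks like one modest treatment effect; the per-therapist averages range from roughly nothing to very large:

```python
import numpy as np

rng = np.random.default_rng(1)
n_therapists, patients_each = 10, 30

# Hypothetical: each therapist has their own true effectiveness,
# varying widely around a modest average.
therapist_effects = rng.normal(0.5, 0.4, n_therapists)

# Each patient's outcome scatters around their therapist's true effect.
outcomes = rng.normal(therapist_effects[:, None], 1.0,
                      (n_therapists, patients_each))

print(f"pooled 'treatment effect': {outcomes.mean():.2f}")
print("per-therapist means:", np.round(outcomes.mean(axis=1), 2))
```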

The result is rigid treatment designs that do not provide therapists with information about how to get the greatest clinical effect out of them. In practice, psychotherapy is fluid and must be adapted to the individual client on a moment-by-moment basis. But because these manualized treatments are supposed to be “evidence-based,” clinicians are obligated to go through a process of supervised practice during which they must comply with the manual at the expense of therapeutic effectiveness. Then, once they are certified to administer this evidence-based treatment, they can go back to doing the things they have found actually help people, while adhering only to the basic structure of the manualized treatment. In my experience, clinicians do their best to pick very “easy” and compliant patients as test cases for certification, since these types of patients are the ones who will generally get better no matter what you do with them.

Statistically significant results are not good enough to be considered clinical success

Even if we ignore the methodological problems and take these studies at face value, the results they achieve are generally insufficient to indicate a treatment effect that I would be satisfied with in my own practice. A “statistically significant” effect just means that the observed difference was unlikely to have occurred by chance. So with this type of result, we know that the treatment had some effect, but we know nothing about how large that effect was.

For that, we need to look at effect sizes. Even a “large” effect size (a Cohen’s d of 0.8) really only means that the average person in the treatment group ended up better off than about 79% of the people who didn’t get the treatment. We’re not talking cures here; we’re talking about heavily overlapping groups, with many treated patients ending up no better off than people who got nothing.
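
For anyone curious where that 79% figure comes from: assuming normally distributed outcomes with equal variance in both groups (the textbook assumptions behind Cohen’s d), a short sketch reproduces it:

```python
from scipy.stats import norm

d = 0.8  # a "large" effect size by Cohen's conventions

# Cohen's U3: the proportion of the untreated group that the *average*
# treated person ends up better off than.
u3 = norm.cdf(d)  # ~0.788, i.e. the "79%" above

# Probability of superiority: the chance that a randomly chosen treated
# person does better than a randomly chosen untreated person.
prob_superiority = norm.cdf(d / 2**0.5)  # ~0.714

print(f"U3: {u3:.1%}, probability of superiority: {prob_superiority:.1%}")
```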

Of course, “better” depends on the measures used. In most studies, outcomes are measured using symptom questionnaires that patients are asked to fill out before and after the treatment. The symptoms on the questionnaire will relate only to the specific disorder being treated, so for most studies we never know how well patients ended up in terms of overall quality of life. The broader the disorder, the bigger this problem becomes.

For example, Dialectical Behavior Therapy (DBT) is widely recognized as the best and most empirically supported treatment for Borderline Personality Disorder. And the studies have shown that DBT does, in fact, reduce self-harm and increase treatment compliance in sufferers of borderline personality disorder. They have not, however, shown DBT to cure borderline personality disorder — and not for lack of trying. So here you have a widely promoted intervention that does not bring about a reversal of the diagnosis for which it is prescribed. There are interventions which have been shown to do this, such as Mentalization-Based Treatment. But, for a variety of reasons, that treatment does not seem to be gaining traction in the evidence-based psychology community. As more money gets thrown at DBT studies, the likelihood increases that someone will eventually manage to get a positive result for reversal of borderline personality disorder, and then that finding will be used to solidify DBT as the mandated choice among healthcare systems using evidence-based treatments.

Study funding and treatment implementation tends to be theoretically aligned

Cognitive Behavior Therapy (CBT) is far and away the most widely studied form of psychological treatment. Let’s set aside for a moment the fact that “CBT” has gradually expanded to include the most widely used interventions from all theoretical frameworks, as well as longer treatment periods for complicated or difficult presentations. It has been shown effective for a wide range of psychiatric disorders, with more added all the time. Based on this impressive body of literature, many researchers, educators, and clinicians assert that it is the most effective form of psychotherapy. The problem is, it’s not.

For decades, studies comparing CBT to other types of therapy have found similar effect sizes across treatments, with psychodynamically oriented therapies showing more lasting treatment gains. So why is CBT so much better represented in funding and implementation? Essentially, politics. The cognitive-behavioral theoretical camp got a strong foothold at NIMH and held on for dear life. There are more studies showing that CBT is effective simply because more studies of this type of treatment have been funded.

The same is true of exposure-based therapies for trauma. Therapies which focus on exposure are the most heavily promoted and are widely held to have the best empirical base. However, at least five other types of treatment show equal treatment effects, and a couple of them contain no exposure elements whatsoever. There are underlying factors that are getting lost in the politics. As a clinician I have absolutely no interest in the politics; I’m interested in figuring out what works for each patient. From that basis, we may be able to start to extrapolate the ingredients that make treatment more generally effective for all patients (this is the idea of practice-based evidence).

Treatments studied often have poor theoretical integrity

As I mentioned, CBT has been steadily expanding over the years to encompass more techniques, such as relaxation, visualization, and mindfulness meditation. The exploration of core beliefs and schemas has evolved in a direction very similar to modern brief psychodynamic therapies. The only thing that distinguishes CBT from the less- (but still often well-) supported psychotherapies is its theoretical foundation.

However, the same expansion beyond CBT’s theoretical basis has led to the creation of treatment protocols which are essentially eclectic. Researchers have long lamented their difficulty in isolating the specific factors that lead to successful treatment, and these new, ostensibly cognitive-behavioral protocols amplify the problem by introducing a technique-driven, kitchen-sink approach that renders a difficult task impossible. Instead of drilling down into the therapies to identify the process variables that highly effective therapists incorporate naturally, we are building up theoretically vapid treatments that seem to work about as well as everything else, but target a specific psychiatric disorder (for which there is also little theoretical or empirical integrity). Are you confused by all this yet?

Symptom-focused EBTs lead to modularization of treatment, increased total treatment time, and reduced overall effectiveness

As this newfound glut of grant-funded “evidence-based” psychotherapies is rolled out in large healthcare systems, the result is a bureaucratization of psychotherapy. Patients are shuffled from one empirically supported group to another, and handed off from one therapist to another based on what symptoms each therapist is certified to treat.

For example, a common case is an obese veteran with chronic pain, depression, PTSD, and alcohol dependence. For the obesity, they’ll be referred to a weight loss group. For the chronic pain, they’ll be referred for manualized pain management treatment, in either a group or individual format. The alcohol dependence will likely lead to a referral to a substance abuse treatment group, which will be followed by a referral to a manualized “Seeking Safety” group to transition between substance abuse treatment and PTSD treatment. The PTSD might be treated with either Cognitive Processing Therapy or Prolonged Exposure Therapy, either of which may lead to some relief from depression, but an additional referral will likely be needed for a depression treatment group. So in this common scenario, the patient is asked to participate in six separate 8-12 week, empirically supported treatments, each of which requires orientation into a separate vocabulary and set of expectations, and each of which has roughly even odds of producing remission for its particular symptom. The treatments themselves will average about 60 weeks, and with gaps of a month or more between treatments, the whole process will take a year and a half, conservatively. How long do those non-evidence-based treatments take again?
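
The arithmetic behind that estimate is simple enough to check; this back-of-the-envelope sketch uses only the hypothetical numbers from the scenario above:

```python
# Back-of-the-envelope timeline for the scenario above, using the
# hypothetical numbers from this post (not data from any actual system).
n_treatments = 6
weeks_per_treatment = (8 + 12) / 2        # midpoint of an 8-12 week protocol
gap_weeks = 4                             # a conservative 1-month gap

in_treatment = n_treatments * weeks_per_treatment       # 60 weeks
total = in_treatment + (n_treatments - 1) * gap_weeks   # 80 weeks

print(f"{in_treatment:.0f} weeks in treatment, "
      f"about {total / 52:.1f} years start to finish")  # ~1.5 years
```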

Moreover, decades of studies examining the factors related to positive treatment outcomes have shown that one of the most important factors is the therapeutic relationship. The model I have described, which is currently in use within some major healthcare systems and which is being heavily promoted for implementation everywhere, simply obliterates the therapeutic relationship that would be established over the course of an 18-month, single-provider course of psychotherapy. That relationship is replaced by a series of much shallower treatment relationships with well-meaning therapists, and one major relationship, to the bureaucracy itself.
