The Gross Distortion Of "Scientifically Validated" Mental Health Care

Koreans have a folk saying that translates, "The navel is bigger than the belly." The expression applies when someone loses a proper sense of proportion - when something incidental, or merely instrumental, becomes so overdone that it overwhelms whatever it is supposed to serve.

In mental health care, our job is to provide care for certain sorts of suffering. The sciences help, or can. When we think of "being scientific" as definitive of our work, the navel is apt to grow bigger than the belly.

Most of us most of the time exaggerate the role and importance of science in mental health care. The leaders of the professions not only indulge but demand such exaggeration, most notably when arguing publicly for more money, greater power, and more prestige for their trades.

Once we believe that our "scientific base" gives us legitimacy and authority, we're apt to believe (and convince the public) that without a scientific base, what we do lacks legitimacy.

That ramps up the urgency with which we claim to be scientific. So we claim to have more science than we have, we claim that it proves things that it does not, we claim that it answers questions that simply are not scientific questions.

And some of us even condemn as "unethical" and dangerous clinicians who do not share our particular - and often peculiar - notions of what counts as good science.

Meanwhile, we overlook the actual sources of our beliefs, we ignore vast intellectual resources we could use to good effect, and we give bad answers with undue confidence.

That is one gigantic navel.

The issue can be illustrated by a controversy raging around the draft version of DSM-V, which departs drastically from DSM-IV, purportedly on the basis of science.

Robert Spitzer, who chaired the DSM-III task force, and Allen Frances, who chaired the DSM-IV task force, have been highly critical of the new edition. When DSM-III was developed, Spitzer and company made a calculated compromise between established clinical practice and more modern notions of scientific investigation. Clinical practice, it was held, had developed a body of beliefs that deserved respect, even if it did not have the modern scientific basis claimed for newer diagnoses. DSM-IV continued this balancing act.

Frances, in particular, makes the argument that where established practice has found a category to be useful, we should relinquish it only if there is scientific evidence that it is wrong. The editors of DSM-V, to the contrary, argue that unless something has a scientific basis, it should be discarded.

Whatever we may think of DSM-III and IV, they illustrate the view I am espousing here: our job is to provide care, and currently fashionable notions of science hardly exhaust our ways of accumulating useful beliefs. DSM-V, on the other hand, makes science (more precisely, its editors' highly debatable contentions about current science) more important than any other form of learning how to provide care.

Maybe tens of thousands of clinicians have treated millions of patients for disorders that do not exist. That's certainly possible. But every study of mental health outcomes has shown that most people who get care get better. That would certainly seem to suggest that care provided without benefit of scientific validation has something to commend it. And that argues that science is not definitive of good care.

We can establish this point by looking at the media's (and APA leadership's) darling method, cognitive behavioral therapy (CBT). Despite its own lofty rhetoric about "scientific validation," CBT proves that effective care does not depend on good science.

We all know that cognitive behavioral therapy claims to be the most "scientifically validated psychotherapy," on the basis of randomized clinical trials (RCTs) that test for "treatment outcomes." I'll show, in later blogs, that RCTs show considerably less than CBT's advocates claim. But there's no disputing that, in many circumstances, cognitive behavioral therapy is an effective way to provide care.

But as a matter of plain fact, the central psychological tenets of CBT - for instance, that thoughts cause emotions, that cognitive distortions cause psychopathology, that minimizing, maximizing, and catastrophizing are pathogenic - are not "scientifically validated."

In fact, very strong sciences maintain, and can reasonably be said to show, that those assertions are simply false.

The overwhelming majority of research in affective neuroscience is done on animals, who clearly lack the neurology for high-level cognitive functions. (Dolphins, whom most people who know them well would claim have emotions, have a neocortex organized quite unlike ours.) I do not know of any major research program in affective neuroscience, or other sciences that study emotion, that would claim that emotions depend on high-level cognitive functions.

And cognitive distortion is not pathogenic - it is completely normal. Rationality is rare. The work of Kahneman and Tversky (for which Kahneman won the Nobel Prize) established beyond doubt that our most common habits of thought are wildly irrational. (Dozens of books have popularized this work in the last few years - for instance, Bozo Sapiens and Predictably Irrational.)

Minimizing and maximizing are basic functions of maintaining a coherent belief system, as psychologists have known since Jerome Bruner's work over fifty years ago.

Catastrophizing is an essential human virtue - shown when we buckle our seat belts, when we stop buying a particular medicine because a minuscule fraction of one percent of its users suffer harm, or when we regulate food safety on the basis of outbreaks of illness that affect fewer than one in a million Americans.

So if we accept CBT's claim to have shown itself effective, we must also accept that beliefs need not be scientifically validated to provide a basis for effective care, since its beliefs are certainly not scientifically validated.

That does not mean we should be happy with false beliefs - or with systems of psychotherapy that teach patients falsehoods. But it proves pretty well that equating effective care with science is misguided.

Science matters - and a well-proportioned navel can be very attractive. But we really need to think much more carefully about just how science can aid care, and how the insistence that only science can provide useful knowledge warps our thinking and our practice.

Comments
  • Marshall Couper

    Bob, how refreshing to read an article by someone prepared to challenge the status quo and have us think about such an important issue from a new perspective. This is a question I ponder every day in my position as CEO of an Australian mental health software company. Consider the following example of how the navel has become bigger than the belly in the mental health care space:

    Global Mind Screen Group has a DSM-IV-based mental health assessment tool called the Mind Screen. It is a self-assessment tool covering 31 disorders that a patient completes online. The patient can't access their assessment results without visiting their mental health practitioner. It was initially developed to assist time-pressed family physicians to identify their patients' mental health issues early and then treat or refer as appropriate. It is well known that, because of the many significant systemic barriers to quality mental health outcomes in primary care (such as a lack of time, training, or equivalent reimbursement), many family physicians are misdiagnosing or missing diagnoses. Most primary care practitioners have lost confidence in their own ability to assess mental health issues and so often avoid the issues in a patient consultation. Certainly few have the time or inclination to do a comprehensive assessment of primary and secondary comorbid conditions, and many patients who should be referred to specialists simply are not.

    Now here's the interesting point. While there are many early adopters of the innovative Mind Screen approach to mental health assessment, there are also a lot of practitioners who are waiting for the Mind Screen to be 'validated' against another outcome tool, such as the SCID or CIDI, which are 'supposedly validated' themselves. What I find interesting, but not surprising, is that these same slow adopters who are unwilling to use the Mind Screen until it is 'validated' are generally the same practitioners who currently use the K10 alone, or spend just a few minutes with a patient asking a few questions off the top of their head, before making a diagnosis (not necessarily the correct diagnosis) and putting them on medication! Not to put down the K10, but it is a single-disorder assessment tool, and practitioners are relying upon it to pick up more than just depression. The K10 does not assess PTSD, or OCD, or social phobia, etc. So these late adopters, who rely on single-disorder tools or a few questions in a time-pressed consultation, are generally the practitioners who would benefit the most by offering the Mind Screen to their patients.

    While their attitude is understandable - after all, a doctor's many years of training are based on scientific validation of symptoms (X-rays, blood tests, CT scans, etc.) - it is not necessarily acceptable. Too many patients have poor health outcomes as a result of practitioners who believe validation is a requirement in mental health. This is simply not the case for mental health care, where diagnosis is an art form that requires time with a patient, training, and experience, and where subsequent improvement in the patient's wellbeing is generally highly correlated with how early their condition is identified and acted upon - again, it's all about early intervention!

    Global Mind Screen's priority is ensuring that as many patients as possible have access to high-quality assessment. So despite our views mirroring those expressed in Bob's article, we are taking the time and spending the money to 'validate' the Mind Screen against the SCID in a Harvard study. Doing so will, we hope, provide some comfort to the slow adopters of this innovation in mental health assessment. For those who are delaying the adoption of the Mind Screen in their practice because of the question of validation, I urge you to reconsider based on the many salient points made in Bob Fancher's article.

    For the early adopters, take pride in your rational, common-sense approach to the question of 'validation': based on unsolicited feedback we've had from many of you, your foresight is leading to better outcomes for your patients and in many cases has already led to lives saved.

    Thank you Bob for your inspiring and thought provoking article.

    (The Mind Screen will soon be available in the US, subject to capital raising. For more information, contact marshall@globalmindscreen.com.au.)

  • Cindy Weiss

    Thank you for your common sense and willingness to challenge what has become the obsession of our society in general, and of academicians and politicians in particular. That which has true meaning - like relationships and emotional healing - can hardly be quantified.