I mean, duh! See my commentary below.
Hysterical woman screaming. Albert Londe, c. 1890. / Courtesy of Wellcome Library, London. Photo Credit: Wikimedia
June 4, 2013
by Ethan Watters
Imagine for a moment that the American Psychiatric Association was about to compile a new edition of its Diagnostic and Statistical Manual of Mental Disorders. But instead of 2013, imagine, just for fun, that the year is 1880.
Transported to the world of the late 19th century, the psychiatric body would have virtually no choice but to include hysteria in the pages of its new volume. Women by the tens of thousands, after all, displayed the distinctive signs: convulsive fits, facial tics, spinal irritation, sensitivity to touch, and leg paralysis. Not a doctor in the Western world at the time would have failed to recognize the presentation. “The illness of our age is hysteria,” a French journalist wrote. “Everywhere one rubs elbows with it.”
Hysteria would have had to be included in our hypothetical 1880 DSM for the exact same reasons that attention deficit hyperactivity disorder is included in the just-released DSM-5. The disorder clearly existed in a population and could be reliably distinguished, by experts and clinicians, from other constellations of symptoms. There were no reliable medical tests to distinguish hysteria from other illnesses then; the same is true of the disorders listed in the DSM-5 today. Practically speaking, the criteria by which something is declared a mental illness are virtually the same now as they were over a hundred years ago.
The DSM determines which mental disorders are worthy of insurance reimbursement, legal standing, and serious discussion in American life. That its diagnoses are not more scientific is, according to several prominent critics, a scandal. In a major blow to the APA’s dominance over mental-health diagnoses, Thomas R. Insel, director of the National Institute of Mental Health, recently declared that his organization would no longer rely on the DSM as a guide to funding research. “The weakness is its lack of validity,” he wrote. “Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever.” As an alternative, Insel called for the creation of a new, rival classification system based on genetics, brain imaging, and cognitive science.
This idea—that we might be able to strip away all subjectivity from the diagnosis of mental illness and render psychiatry truly scientific—is intuitively appealing. But there are a couple of problems with it. The first is that the science simply isn’t there yet. A functional neuroscientific understanding of mental suffering is years, perhaps generations, away from our grasp. What are clinicians and patients to do until then? But the second, more telling problem with Insel’s approach lies in its assumption that it is even possible to strip culture from the study of mental illness. Indeed, from where I sit, the trouble with the DSM—both this one and previous editions—is not so much that it is insufficiently grounded in biology, but that it ignores the inescapable relationship between social cues and the shifting manifestations of mental illness.
Psychiatry tends not to learn from its past. With each new generation, psychiatric healers dismiss the enthusiasms of their predecessors by pointing out the unscientific biases and cultural trends on which their theories were based. Looking back at hysteria, we can see now that 19th-century doctors were operating amidst fanciful beliefs about female anatomy, an assumption of feminine weakness, and the Victorian-era weirdness surrounding female sexuality. And good riddance to bad old ideas. But the more important point to take away is this: There is little doubt that the symptoms expressed by those thousands of women were real.
The resounding lesson of the history of mental illness is that psychiatric theories and diagnostic categories shape the symptoms of patients. “As doctors’ own ideas about what constitutes ‘real’ dis-ease change from time to time,” writes the medical historian Edward Shorter, “the symptoms that patients present will change as well.”
This is not to say that psychiatry wantonly creates sick people where there are none, as many critics fear the new DSM-5 will do. Allen Frances—a psychiatrist who, as it happens, was in charge of compiling the previous DSM, the DSM-IV—predicts in his new book, Saving Normal, that the DSM-5 will “mislabel normal people, promote diagnostic inflation, and encourage inappropriate medication use.” Big Pharma, he says, is intent on ironing out all psychological diversity to create a “human monoculture,” and the DSM-5 will facilitate that mission. In Frances’ dystopian post-DSM-5 future, there will be a psychoactive pill for every occasion, a diagnosis for every inconvenient feeling: “Disruptive mood dysregulation disorder” will turn temper tantrums into a mental illness and encourage a broadened use of antipsychotic drugs; new language describing attention deficit disorder that expands the diagnostic focus to adults will prompt a dramatic rise in the prescription of stimulants like Adderall and Ritalin; the removal of the bereavement exclusion from the diagnosis of major depressive disorder will stigmatize the human process of grieving. The list goes on.
In 2005, a large study suggested that 46 percent of Americans will receive a mental-health diagnosis at some point in their lifetimes. Critics like Frances suggest that, with the new categories and loosened criteria in the DSM-5, the percentage of Americans thinking of themselves as mentally ill will rise far above that mark.
But recent history doesn’t support these fears. In 1994 the DSM-IV—the edition Frances oversaw—launched several new diagnostic categories that became hugely popular among clinicians and the public (bipolar II, attention deficit hyperactivity disorder, and social phobia, to name a few), but the number of people receiving a mental-health diagnosis did not go up between 1994 and 2005. In fact, as psychologist Gary Greenberg, author of The Book of Woe, recently pointed out to me, the prevalence of mental health diagnoses actually went down slightly. This suggests that the declarations of the APA don’t have the power to create legions of mentally ill people by fiat, but rather that the number of people who struggle with their own minds stays somewhat constant.
What changes, it seems, is that they get categorized differently depending on the cultural landscape of the moment. Those walking worried who would have accepted the ubiquitous label of “anxiety” in the 1970s would accept the label of depression that rose to prominence in the late 1980s and the 1990s, and many in the same group might today think of themselves as having social anxiety disorder or ADHD.
Viewed over history, mental health symptoms begin to look less like immutable biological facts and more like a kind of language. Someone in need of communicating his or her inchoate psychological pain has a limited vocabulary of symptoms to choose from. From a distance, we can see how the flawed certainties of Victorian-era healers created a sense of inevitability around the symptoms of hysteria. There is no reason to believe that the same isn’t happening today. Healers have theories about how the mind functions and then discover the symptoms that conform to those theories. Because patients usually seek help when they are in need of guidance about the workings of their minds, they are uniquely susceptible to being influenced by the psychiatric certainties of the moment. There is really no getting around this dynamic. Even Insel’s supposedly objective laboratory scientists would, no doubt, inadvertently define which symptoms our troubled minds gravitate toward. The human unconscious is adept at speaking the language of distress that will be understood.
Why do psychiatric diagnoses fade away only to be replaced by something new? The demise of hysteria may hold a clue. In the early part of the 20th century, the distinctive presentation of the disorder began to blur and then disappear. The symptoms began to lose their punch. In France this was called la petite hystérie. One doctor described patients who would “content themselves with a few gesticulatory movements, with a few spasms.” Hysteria had begun to suffer from a kind of diagnostic overload. By the 1930s or so, the dramatic and unmistakable symptoms of hysteria were vanishing from the cultural landscape because they were no longer recognized as a clear communication of psychological suffering by a new generation of women and their healers.
It is true that the DSM has a great deal of influence in modern America, but it may be more of a scapegoat than a villain. It is certainly not the only force at play in determining which symptoms become culturally salient. As Frances suggests, the marketing efforts of Big Pharma on TV and elsewhere have a huge influence over which diagnoses become fashionable. Some commentators have noted that shifts in diagnostic trends seem uncannily timed to coincide with the term lengths of the patents that pharmaceutical companies hold on drugs. Is it a coincidence that the diagnosis of anxiety diminished as the patents on tranquilizers ran out? Or that the diagnosis of depression rose as drug companies landed new exclusive rights to sell various antidepressants? Consider for a moment that the diagnosis of depression didn’t become popular in Japan until GlaxoSmithKline got approval to market Paxil in the country.
Journalists play a role as well: We love to broadcast new mental-health epidemics. The dramatic rise of bulimia in the United Kingdom neatly coincided with the media frenzy surrounding the rumors and subsequent revelation that Princess Di suffered from the condition. Similarly, an American form of anorexia hit Hong Kong in the mid-1990s just after a wave of local media coverage brought attention to the disorder.
The trick is not to scrub culture from the study of mental illness but to understand how the unconscious takes cues from its social settings. This knowledge won’t make mental illnesses vanish (Americans, for some reason, find it particularly difficult to grasp that mental illnesses are absolutely real and culturally shaped at the same time). But it might discourage healers from leaping from one trendy diagnosis to the next. As things stand, we have little defense against such enthusiasms. “We are always just one blockbuster movie and some weekend therapist’s workshops away from a new fad,” Frances writes. “Look for another epidemic beginning in a decade or two as a new generation of therapists forgets the lessons of the past.” Given all the players stirring these cultural currents, I’d make a sizable bet that we won’t have to wait nearly that long.
A.K.: I remember, back in the ’60s, when I first realized, probably reading R.D. Laing and David Cooper, that sanity is a social construct (which means, by definition, that so is insanity). If you’re “in,” then you’re sane; if you’re “out,” then you’re insane.
That was during a tumultuous year when I realized that what some of my professors were calling, behind my back, a “mental breakdown,” was, in actuality, a “mental breakthrough.”
Once you get wind of the way society classifies behavior as in or out (of fashion, or norms of acceptability), and how our familial and social conditioning tends to fashion a sort of hard, internal helmet that’s not supposed to leak or admit light, it’s hard to take psychiatry seriously. In fact, I can’t.
This morning, I googled “Sanity is a Social Construct” and got this:
Coming to the fore in the 1960s, “Anti-Psychiatry” (a term first used by David Cooper in 1967) defined a movement that vocally challenged the fundamental claims and practices of mainstream psychiatry. The psychiatrist Thomas Szasz argues that “mental illness” is an inherently incoherent combination of a medical and a psychological concept, but popular because it legitimises the use of psychiatric force to control and limit deviance from societal norms.
Michel Foucault argued that the concepts of sanity and insanity were social constructs that did not reflect quantifiable patterns of human behaviour but which, rather, were indicative only of the power of the ‘sane’ over the ‘insane’. The novel One Flew Over the Cuckoo’s Nest became a bestseller, resonating with public concern about involuntary medication, lobotomy and electroshock procedures used to control patients.
And this: No need to watch all the way through, just to realize that the idea of insanity has been turned into a silly dance.