[This piece was co-authored by Vincent Geloso and originally published at the American Institute for Economic Research.]
Ramming an icepick through someone’s eyelid to remove part of their brain sounds like a horrifying method of torture. Yet this procedure, known as the lobotomy, was a common treatment for mental illness in the United States for nearly 40 years. From 1936 until 1972, nearly 60,000 people were lobotomized, most without the consent of the patient or their legal caretaker.
Unsurprisingly, the procedure was a spectacular failure. After surgery, patients were often paranoid, emotionally volatile, incontinent, and severely impaired intellectually. Surgical complications frequently left patients unable to function independently, requiring constant supervision and care. Patients released from the asylum after being lobotomized typically returned within a few months, whereupon they often underwent a second (or, in one case, a fourth) lobotomy.
The lobotomy has been described as “one of the most spectacular failures in the history of medicine.” But unlike many historical medical practices that seem barbaric and harmful only in hindsight, the lobotomy was scorned and dismissed by medical professionals even as it reached peak popularity. By 1941, the American Medical Association had denounced the lobotomy as ineffective, and a worldwide consensus soon developed along the same lines. Yet the procedure continued to grow in popularity, culminating in a “lobotomy boom” of the mid-1940s and early 1950s.
But why did the lobotomy become popular, and why did its use persist so long after the medical community came out against it? In our paper, forthcoming in Research Policy, we argue that the answer comes down to incentives.
In the early 1900s, a large public health movement led by politically connected physicians called on the federal government to increase funding for public mental asylums. The movement was successful: through the 1940s, public asylums and psychiatric hospitals received additional federal funding based on the number of committed patients they housed. At the state level, physicians lobbied for less stringent commitment laws, reducing the legal requirements for having citizens involuntarily committed. Accordingly, commitment rates skyrocketed, and those managing asylums, often called superintendents, received considerably more funding.
However, staff-to-patient ratios within asylums did not keep pace. By 1949, the National Bureau of Mental Hygiene estimated there was one asylum employee for every 21 patients, and medical historians estimate that committed patients often received only 30 minutes of contact with a physician per month.
Consequently, those managing asylums sought low-cost treatment options, and the lobotomy provided one. Unlike the therapeutic, hydro, and shock treatments then available (all of which are still used today), the lobotomy was cheaper and did not take years to complete. It also frequently made difficult patients more docile and easier to manage.
Because superintendents received federal funding based on the number of patients they housed rather than on the quality of care they provided, treating patients effectively was a secondary concern. State agencies funding public asylums likewise had little reason to care about patients’ wellbeing. As we note in our article, many asylums documented the number of lobotomies performed in order to secure additional funding.
In contrast, private asylums, which faced the same overpopulation issues and treated the same patient demographics as public asylums, were funded by philanthropic donors and by patients’ legal caretakers. When patients failed to improve, were mistreated, or were not offered sufficient quality of care, an asylum risked its profitability. Accordingly, ineffective or excessively harmful treatment methods like the lobotomy were detrimental to the bottom line.
Predictably, private asylums and psychiatrists in private practice were significantly less likely to lobotomize their patients. In 1950, at the peak of the lobotomy’s popularity, only six percent of lobotomies were performed in private practices or private asylums. The literature on mental asylums also notes that most lobotomies in public asylums occurred within a few months of a patient’s commitment, whereas lobotomies in private asylums or private practice occurred only after years (or decades) of failed treatment, and typically at the patient’s request.
The use of the lobotomy in US medical history remains a shocking episode, and many still ask how such a thing could have happened. Examining the incentives created by well-intended but ultimately harmful public health policy provides an answer. And while the lobotomy continues to offer a cautionary tale of medicine gone astray, we argue it should also serve as a cautionary tale about the harmful consequences of state-assisted health policies. The lobotomy is an extreme example, but the reasons it became popular and endured are eerily familiar.