Megan McArdle in this month’s issue of The Atlantic:
Last year’s national debate on health-care legislation tended to dwell on either heart-wrenching anecdotes about costly, unattainable medical treatments, or arcane battles over how many people in the United States lacked insurance. Republicans rarely plumbed the connection between insurance and mortality, presumably because they would look foolish and heartless if they expressed any doubt about health insurance’s benefits. It was politically safer to harp on the potential problems of government interventions—or, in extremis, to point out that more than half the uninsured were either affluent, lacking citizenship, or already eligible for government programs in which they hadn’t bothered to enroll.
Even Democratic politicians made curiously little of the plight of the uninsured. Instead, they focused on cost control, so much so that you might have thought that covering the uninsured was a happy side effect of really throttling back the rate of growth in Medicare spending. When progressive politicians or journalists did address the disadvantages of being uninsured, they often fell back on the same data Klein had used: a 2008 report from the Urban Institute that estimated that about 20,000 people were dying every year for lack of health insurance.
But when you probe that claim, its accuracy is open to question. Even a rough approximation of how many people die because of lack of health insurance is hard to reach. Quite possibly, lack of health insurance has no more impact on your health than lack of flood insurance.
Part of the trouble with reports like the one from the Urban Institute is that they cannot do the kind of thing we do to test drugs or medical procedures: divide people randomly into groups that do and don’t have health insurance, and see which group fares better. Experimental studies like this would be tremendously expensive, and it’s hard to imagine that they’d attract sufficient volunteers. Moreover, they might well violate the ethical standards of doctors who believed they were condemning the uninsured patients to a life nasty, brutish, and short.
So instead, researchers usually do what are called “observational studies”: they take data sets that include both insured and uninsured people, and compare their health outcomes—usually mortality rates, because these are unequivocal and easy to measure. For a long time, two of the best studies were Sorlie et al. (1994), which used a large sample of census data from 1982 to 1985; and Franks, Clancy, and Gold (1993), which examined a smaller but richer data set from the National Health and Nutrition Examination Survey, and its follow-up studies, between 1971 and 1987. The Institute of Medicine used the math behind these two studies to produce a 2002 report on an increase in illness and death from lack of insurance; the Urban Institute, in turn, updated those numbers to produce the figure that became the gold standard during the debate over health-care reform.
The first thing one notices is that the original studies are a trifle elderly. Medicine has changed since 1987; presumably, so has the riskiness of going without health insurance. Moreover, the question of who had insurance is particularly dodgy: the studies counted as “uninsured” anyone who lacked insurance in the initial interview. But of course, not all of those people would have stayed uninsured—a separate study suggests that only about a third of those who reported being uninsured over a two-year time frame lacked coverage for the entire period. Most of the “uninsured” people probably got insurance relatively quickly, while some of the “insured” probably lost theirs. The effect of this churn could bias your results either way; the inability to control for it makes the statistics less accurate.
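The churn problem can be made concrete with a toy calculation. The mortality rates below are invented for illustration; only the one-third churn figure comes from the study McArdle cites. If two-thirds of the baseline "uninsured" soon regain coverage, the measured excess mortality of that group shrinks to a third of the true long-term effect.

```python
# Illustrative (made-up) annual mortality rates -- NOT from the studies discussed.
rate_insured = 0.010      # continuously insured
rate_uninsured = 0.020    # continuously uninsured

# Per the churn figure McArdle cites, only about a third of those counted as
# uninsured at the baseline interview stay uninsured for the whole period.
stay_uninsured = 1 / 3

# Mortality actually observed in the "uninsured at baseline" group, if the
# two-thirds who regain coverage revert to the insured rate:
observed = stay_uninsured * rate_uninsured + (1 - stay_uninsured) * rate_insured

true_excess = rate_uninsured - rate_insured    # the real long-term gap
measured_excess = observed - rate_insured      # what the study would see
```

Under these assumptions the study would measure only a third of the true excess mortality; churn in the other direction (insured people losing coverage) would push the bias the opposite way, which is why the net effect is hard to sign.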
The bigger problem is that the uninsured generally have more health risks than the rest of the population. They are poorer, more likely to smoke, less educated, more likely to be unemployed, more likely to be obese, and so forth. All these things are known to increase your risk of dying, independent of your insurance status.
There are also factors we can’t analyze. It’s widely believed that health improves with social status, a quality that’s hard to measure. Risk-seekers are probably more likely to end up uninsured, and also to end up dying in a car crash—but their predilection for thrills will not end up in our statistics. People who are suspicious of doctors probably don’t pursue either generous health insurance or early treatment. Those who score low on measures of conscientiousness often have trouble keeping jobs with good health insurance—or following complicated treatment protocols. And so on.
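The bias these paragraphs describe can be shown with a toy simulation (my own illustrative sketch, with invented parameters, not data from any of the studies discussed): give insurance a true causal effect of exactly zero, let a single unmeasured trait raise both the chance of being uninsured and the chance of dying, and the naive comparison still shows substantially higher mortality among the uninsured.

```python
import random

random.seed(0)

N = 200_000
deaths = {"insured": 0, "uninsured": 0}
counts = {"insured": 0, "uninsured": 0}

for _ in range(N):
    # An unmeasured trait (say, risk-seeking) that never shows up in a survey.
    risky = random.random() < 0.2
    # The trait makes people likelier to be uninsured...
    p_uninsured = 0.30 if risky else 0.10
    group = "uninsured" if random.random() < p_uninsured else "insured"
    # ...and independently raises mortality. Insurance itself does nothing here.
    p_death = 0.03 if risky else 0.01
    counts[group] += 1
    deaths[group] += random.random() < p_death

rate = {g: deaths[g] / counts[g] for g in counts}
# The uninsured group shows markedly higher mortality despite a true causal
# effect of zero -- pure unobserved-variable bias.
```

No amount of controlling for the recorded variables fixes this, because the confounder was never recorded; that is exactly the objection to the observational studies.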
The left is predictably fond of the study which got the largest number, 45,000 a year. Unfortunately, its authors are political advocates for a single-payer system, who also helped author the notorious studies on medical bankruptcies. Those studies are very shoddily done, with parameters that somehow always conspire to produce the maximum possible number. In the first study, they set an absurdly low threshold for what constituted a “medical bankruptcy”. In the second, they chose 2006, the year after the 2005 bankruptcy reform act had driven an unprecedented spike in filings. It seems pretty likely that medical bankruptcies were bound to be overrepresented in 2006, since most financial events are easier to see coming than illnesses. But even if you disagree–and the authors offered an incredibly wan explanation of why they did–it’s very clear that the people who filed in 2006 were not going to be a representative sample of bankruptcies in a normal year. I can’t imagine why you would choose to study 2006 unless you were looking for biased results. I have to conclude that their political beliefs are affecting their work, which means I wouldn’t touch that 45,000 number with a bargepole–I wouldn’t cite anything they authored even if it offered to prove beyond a shadow of a doubt that I was right about everything.
The right, meanwhile, shuns the subject like the plague. It will not do anyone’s career any good to be attached to an argument that sounds like the health care equivalent of “let them eat cake”.
So allow me, maybe, to be the first. I’m afraid I’m not confident about any number. All of these studies suffer from unobserved variable bias, which is to say, the uninsured are not like the rest of us. (The long-term uninsured, I mean; the short-term uninsured are not a large problem for society.) There are all sorts of reasons that people end up uninsured, but most of them are correlated with much poorer health outcomes, and only some of them end up recorded in our surveys.
To give you an example of what I mean, one of the two studies that went into the most commonly cited number–the roughly 20,000 a year figure from the Institute of Medicine and the Urban Institute–found that the highest mortality was not associated with being uninsured, but with being on a government health care program. (The other study excluded those patients.) This was true even after they’d run all their controls. Given that the bulk of the coverage expansion in both the Senate and the House plans comes from Medicaid expansion, this is a little disturbing.
But how likely is it that Medicaid is killing people? Possible, I suppose, but not really all that likely. Medicaid and Medicare patients, too, are not like the broader population. The authors in fact recognized this fact in their paper, pointing out that these patients have higher rates of disability–but then failed to address the obvious question this raised about their data on the uninsured.
This problem plagues almost all of the studies on mortality and the uninsured. Probably the best one looked at patients who had been taken to the ER, which still showed higher mortality for the uninsured. But it’s not clear that this indicates that lacking insurance is dangerous; it may be telling us that people who lack insurance have a lot of factors that lead to poorer health outcomes.
And another McArdle post here.
I agree with her conclusion:
Intuitively, I feel as if there should be some effect. But if the results are this messy, I would guess that the effect is not very big.
Matthew Yglesias tweets:
Do rightwingers really believe that US health insurance has no mortality-curbing impact?
Cowen responds to Yglesias:
I don’t speak for “right-wingers,” but I’ll say this:
1. I genuinely don’t know what to believe. And I often toy with the idea of an “innovation-maximizing” health care policy, so that future coverage is more effective.
2. I am commonly excoriated by people (not Matt) for not supporting government-subsidized universal health insurance, yet few if any of these people grapple seriously with the best evidence.
3. I live in a country where the extension of health insurance is a major issue, and a major budgetary issue, yet much of the discussion is in an evidence-free zone.
4. I don’t view it as incumbent on me to come up with the final answers in this debate or even a provisional stance. It’s incumbent on the people pushing coverage plans to make the case for what they are doing and so far they haven’t. I do recognize that medical bankruptcy is a separate set of issues and that greater coverage will significantly lower financial risk. That said, the appropriate response on the health issue is not to change the topic and start talking about bankruptcy.
Yglesias responds to Cowen:
A few points on the insurance status and mortality debate:
— Normally we require overwhelming empirical data to overturn a principle that has strong theoretical support.
— The empirical data to support the “insurance status doesn’t impact mortality” conclusion is not overwhelming.
— The Institute of Medicine estimates about 18,000 excess deaths annually due to lack of health insurance.
— A Harvard Medical School study finds 45,000 excess deaths.
— Numerous small-scope studies also support this conclusion.
— Studies of broader metrics of health show a robust link between insurance status and health.
— The outcomes of these studies appear to rely significantly on exactly how you specify the model in terms of control variables, which I think is again a reason to be skeptical of overturning the theoretically sound conclusion that insurance saves lives.
— Publicly subsidized health insurance should improve the financial well-being of its recipients, and the linkages between financial well-being and health also appear robust.
— I don’t believe that the people touting these studies really believe them; if widespread beliefs about the desirability of health insurance are totally wrong, this should have dramatic policy implications that should be explored.
— It’s true that there are major inefficiencies in the health care system, and one of the main goals of the health reform plan before congress is to reduce them.
Bottom-line: I’m happy to say that if I were dictator and were about to increase expenditures on public services by $80 billion per year, increasing the number of Americans with comprehensive health insurance would not be my first choice for use of the money but this isn’t the choice-set available. The real issue in the health reform debate, I think, is that many people think that keeping the tax share of GDP low is very important for economic growth. I think that, rather than anything about health care, is the issue that America usually debates in an evidence-free manner.
Austin Frakt responds:

That uninsurance is bad for you is easy to defend if you know the research. There is a large body of health services and health economics literature that documents the negative effects on health due to lack of insurance. My own work with Steve Pizer and Lisa Iezzoni, published in Health Affairs, reviews some of that literature as it pertains to individuals with chronic health conditions.
Using data from the National Health Interview Survey, a recent report found that 46.0 million nonelderly U.S. adults (ages 18–64) reported having at least one of seven major chronic conditions in 1997; by 2006, that number had risen to 57.7 million. This and other studies document much lower access to care among uninsured people with chronic conditions compared with insured people. Adverse access markers include lower rates of having a usual source of care, fewer primary care and specialist visits, more frequent use of emergency departments (EDs) for primary care, and difficulties affording services. Such studies complement a growing body of research documenting poorer health outcomes among uninsured people with chronic conditions. [3-6] Acquiring health insurance can improve people’s health and change downward trajectories of functional declines.
(Bold mine.) Since one focus of Megan McArdle’s Atlantic Monthly piece is health outcomes around the transition to Medicare (see also her related blog post; h/t Tyler Cowen), let’s focus on that for a moment. In Health of Previously Uninsured Adults after Acquiring Medicare Coverage, McWilliams et al. find that
eligibility for Medicare coverage at age 65 years was associated with significant improvements in self reported health trends for previously uninsured adults relative to previously insured adults. … our findings suggest long-term benefits of gaining insurance on the health of previously uninsured Medicare beneficiaries, particularly those with cardiovascular disease or diabetes.
(Again, bold mine.) The evidence that insurance and the access to care it facilitates improves health, particularly for vulnerable populations (due to age or chronic illness, or both) is as close to an incontrovertible truth as one can find in social science.
McArdle responds to Yglesias and Frakt:
Matthew Yglesias twitters “Do rightwingers really believe that US health insurance has no mortality-curbing impact?” Austin Frakt suggests that I am somehow in the grip of this ridiculous belief, and goes on to say that the state of knowledge is beyond the point where we need to understand the size of the effect.
I question the description of myself and Tyler Cowen as “rightwingers”–conservatives hate a good third of my positions at least.
But to answer the question anyway, I thought I’d made it clear, but apparently not: I think it is possible that the lack of insurance has no effect on aggregate mortality statistics. I do not think that this is likely, but I think it’s possible. What I think is likely is that the effect is not that large, because if it were large, it would be very surprising to see so little effect on the mortality of an elderly population with a high mortality rate, or to have a study that samples 600,000 people and finds no effect.
Mostly what I think is that the statistics are really, really flawed. Not because the authors are bad social scientists, but because this stuff is so hard to tease out. Natural experiments are rare, and data sets often hard to come by.
This is about how I feel about the minimum wage. My intuition is that demand curves slope downward, so if you raise the price of labor, employers are likely to consume less of it. But if you can get a study like Card and Krueger, then the effect simply can’t be that large–at least, within the range that the US usually plays with the minimum wage. I don’t think it’s particularly good public policy, because too much of it goes to middle class teenagers and the like, and even small disemployment effects are dangerous for vulnerable populations. But I don’t think it’s super-terrible public policy either.
I’m much more convinced by the benefits of health insurance for certain subpopulations, particularly people with diseases we’re very good at treating. HIV seems to pretty convincingly respond to offering public treatment–which also has a pretty compelling public health rationale. (I don’t want to hear anything about spears mounted on steering wheels, thank you very much). Medicaid expansions provide some pretty good natural experiments, IMHO, indicating that you can improve infant mortality. Poor people with hypertension get better blood pressure control pretty consistently.
Frakt responds to McArdle:
Not only are the statistics on mortality and its relationship to health insurance not flawed (and certainly not “really, really flawed”), but the connection between insurance and non-mortality health outcomes is extremely well established. I cannot fathom how it could be missed by anyone examining the literature. Measuring the effect of insurance on non-mortality health outcomes is not “even harder,” it is far easier. That’s why health services researchers and health economists do it all the time, and publish the results.
This is incredibly important. People really do suffer and die due to lack of insurance. The empirical evidence bears that out. Meanwhile, policymakers debate (and debate, and debate) what to do. McArdle advises a go-slow and/or go-small approach based on a misreading of the evidence. If there is one thing I would hope we could agree on it is that that’s a very poor basis for policy prescriptions. My recommendation: read the literature or a credible literature review before claiming to know what it says or what it implies we should do.
Ezra Klein responds to McArdle:

There are two basic problems with establishing a causal connection between insurance and mortality. The first is that you can’t run the truly random experiment you’d want to run. In that experiment, you’d pick a random sample of the population, divide it into two groups, and take one group’s insurance away. But if you can’t do that, then you’re stuck with all sorts of differences between people who have insurance and people who don’t have insurance. You can control for the obvious ones — smoking, weight, income, etc. — but there’s a lot that you’ll miss. This is the problem that drives McArdle’s essay.
What you’d want to do in this case is find what researchers call “natural experiments.” This is when the world does the work for you. One natural experiment, for instance, is that there are a lot of uninsured Americans who are 64 years old, but at 65, everyone gets Medicare. McArdle spends a lot of time on these studies, saying that the absence of an observable effect on death rates is “probably the single most solid piece of evidence” in favor of her position.
Then her position is pretty weak. Michael McWilliams is an assistant professor of health-care policy and of medicine at Harvard Medical School and an associate physician in the Division of General Medicine at Brigham and Women’s Hospital. The type of guy, in fact, who runs Medicare discontinuity studies. But he’s quick to explain the problem with gleaning mortality data from them. “It’s great study design,” he tells me. “But you’re looking for abrupt changes in health outcomes. For mortality for the general population, you don’t expect an immediate benefit.”
To put that slightly more simply, people don’t die very often. What discontinuity studies identify are abrupt effects, and you don’t expect abrupt death at age 65. One Medicare discontinuity study, however, checked in with a group at risk for abrupt death: the critically ill. The study, conducted by David Card, found “a nearly 1 percentage point drop in 7-day mortality for patients at age 65, implying that Medicare eligibility reduces the death rate of this severely ill patient group by 20 percent.”
But if the critically ill are the subpopulation that makes the most sense for a discontinuity study, they’re still a small subset of the whole. “Medicare study designs lend to very robust studies about disease control,” continues McWilliams. “They’re very good for measuring effects on blood pressure or self-reported health status. They don’t work as well for mortality. It’s misleading to say that just because those studies don’t show those mortality benefits, that means they weren’t there. The study wasn’t suited to measuring that outcome.”
But as McWilliams says, if these studies aren’t very good at measuring the effect insurance has on whether you die, they’re very good at measuring the effect insurance has on things that kill you.
“Mortality is particularly hard to discern statistically,” says Katherine Baicker, a Harvard health economist and a former member of George W. Bush’s Council of Economic Advisers, “because, fortunately, mortality is a less common health outcome than lots of intermediate steps. But you can look at things like heart disease, which is more prevalent.”
There are a lot of those studies. They appear in the Urban Institute’s analysis, though McArdle doesn’t mention them. McWilliams, in fact, recently conducted a large review of this literature, including the newer class of natural experiment studies testing the impact of insurance on conditions like heart disease, cancer and HIV. “Based on the evidence to date,” his study concluded, “the health consequences of uninsurance are real, vary in a clinically consistent manner, [and] strengthen the argument for universal coverage in the United States.”
What we’re left with is three classes of evidence. The first are the major observational studies attempting to model something difficult to model, which is the causal effect of insurance on mortality. They do their best to control for the confounding factors and find an effect anywhere from 18,000 to 45,000 unnecessary deaths a year. The second are the natural experiment studies. The only one that measures people who are actually facing death in the near-term — and these studies are only really useful in the near-term — finds a 20 percent reduction in death rates. Then there are the many, many studies assessing the effect of insurance on conditions that kill you, like high blood pressure and cancer. And they show a large protective effect from insurance.
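The discontinuity logic Klein describes can be sketched with a small simulation. This is my own toy model with invented parameters; only the rough size of the jump echoes Card's estimate for the severely ill. The idea is to compare 7-day mortality in narrow age windows just below and just above the age-65 Medicare cutoff, where smooth age trends roughly cancel and any abrupt jump is attributable to eligibility.

```python
import random

random.seed(1)

def seven_day_mortality(age, has_medicare):
    # Risk drifts smoothly with age; Medicare eligibility at 65 adds an
    # abrupt drop of ~1 percentage point (roughly the size Card reports
    # for critically ill patients -- the magnitude here is illustrative).
    base = 0.040 + 0.001 * (age - 65)
    return base - (0.010 if has_medicare else 0.0)

def simulate(n=100_000):
    outcomes = {"just_under_65": [], "just_over_65": []}
    for _ in range(n):
        age = random.uniform(64, 66)   # narrow window around the cutoff
        has_medicare = age >= 65
        died = random.random() < seven_day_mortality(age, has_medicare)
        key = "just_over_65" if has_medicare else "just_under_65"
        outcomes[key].append(died)
    return {k: sum(v) / len(v) for k, v in outcomes.items()}

rates = simulate()
# A crude discontinuity estimate: the jump in mortality at the cutoff
# (negative means eligibility reduces short-run deaths).
rd_estimate = rates["just_over_65"] - rates["just_under_65"]
```

This also illustrates McWilliams's caveat: the design only detects *abrupt* changes in a short window, so it can recover an effect for patients at risk of dying within days, but tells you little about mortality effects that accumulate over years.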
Michael F. Cannon at Cato:
I don’t know anyone who thinks health insurance has zero effect on mortality overall. Yet it is entirely possible for the average effect to be positive and the marginal effect to be zero. One reason may be that the uninsured do benefit from the human and physical capital that health insurance makes possible. It may also be the case that when the uninsured do obtain health insurance, the additional medical care they receive is more likely to harm them than to help them. The researchers behind the RAND Health Insurance Experiment make essentially the same point.
If the marginal effect of health insurance on health is zero, it raises other interesting questions. Would it also have zero effect on health outcomes if we were to reduce the number of people with health insurance? What is the size of the margin over which health insurance has zero impact? (Robin Hanson suggests it may be very, very large.)
Sonny Bunch at Doublethink:
We live in a country where the Washington Post allows their bloggers to accuse a United States Senator of aiding and abetting in the deaths of hundreds of thousands of his fellow citizens for what amounts to hurt feelings, yet we have no real idea whether or not those hundreds of thousands of people would have been better off with or without health care. The most damning study — the one that claimed some 45,000 people die each year from lack of insurance — was authored by hardcore single payer activists whom McArdle previously busted for their shoddy work studying health-related bankruptcies. A more reasonable number is 20,000, but then there’s a study authored by a member of the Clinton White House that shows coverage has no net effect on mortality.
The point is this: we have no idea, really, what sort of effect health insurance has on mortality. Let’s say that granting universal coverage would save, on net, 5,000 lives a year. Would it then be worth embarking on a multi-trillion dollar new entitlement that ruins the American health care system and stifles innovation in the medical community? I don’t think it would. But you bump that number up to something much higher like 45,000, and maybe the equation changes.
UPDATE: More McArdle
McArdle responds to Klein
UPDATE #2: Will Wilkinson