The National Health Interview Survey also provides quarterly estimates for both insurance status and access measures, making it possible to account for the timing of the policy’s implementation in September 2010. Quarterly data also allowed us to distinguish early effects of the policy from effects several months later. We used the survey’s final data files for 2005-10 and early-release data for the first three quarters of 2011.
Our second data source was the Annual Social and Economic Supplement to the Census Bureau’s Current Population Survey, a nationally representative survey of the US civilian, noninstitutionalized population. We used the 2006-11 data sets, covering calendar years 2005-10. This survey has a substantially larger sample than the National Health Interview Survey, providing us with greater power to detect differential effects of the policy among subgroups.
However, the Census Bureau’s survey lacks information on access to care and does not allow for quarterly coverage estimates. Thus, it is difficult with the Current Population Survey to precisely identify the “pre” and “post” periods or to test whether the effect of the policy strengthened over time. We treated data from the 2011 survey as being from the postimplementation period, although it contains some preimplementation data and captures policy effects only through December 2010. For these reasons, we expected the National Health Interview Survey to capture a larger effect of the provision than the Census Bureau survey does.
Together, these two data sets have unique features that provide a more complete picture of the effects of the dependent coverage provision. Looking ahead to the Affordable Care Act’s major insurance expansions of 2014, it is critical for researchers and policy makers to understand whether different national surveys are likely to produce different estimates of policy effects. The dependent coverage provision presents a useful case study for comparing these data sets.
Analysis

Our analytical approach was a difference-in-differences linear regression. This approach compared outcomes before and after the policy’s implementation for the treatment group (those ages 19-25) and a control group (those ages 26-34), to measure the impact of the dependent coverage provision on coverage and access to care.15 Because people ages 26-34 faced roughly similar conditions in the workforce and the health insurance market to those faced by people ages 19-25 (other than the new law’s provision allowing the younger group to remain on their parents’ health plans), we believe they represented a plausible control group. Our analysis produced similar results with alternative control groups (people ages 26-30 and ages 27-29).
We used linear regression to compare the change in coverage among all people ages 19-25 before and after the policy went into effect with the coverage change in the control group. For simplicity, we assumed that the provision was in effect for all of the fourth quarter of 2010 but for none of the third quarter, which lags the provision’s start by about one week after its actual implementation date of September 23.
We included linear and quadratic time trend variables to adjust for preexisting coverage trends unrelated to the law. We adjusted for race or ethnicity, sex, education, marital status, employment status, and region, although this adjustment had little effect on our results.
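As a rough illustration of this specification, the sketch below estimates such a difference-in-differences model in Python with statsmodels. The data frame df and all column names (any_insurance, young, post, the running quarter index t, and the demographic controls) are hypothetical stand-ins, not the authors’ actual variables.

```python
import statsmodels.formula.api as smf

# df is an assumed pandas DataFrame of person-level survey records.
# 'young' = 1 for ages 19-25, 0 for ages 26-34; 'post' = 1 for quarters
# after implementation; 't' is a running quarter index for the trend terms.
model = smf.ols(
    "any_insurance ~ young * post + t + I(t**2)"
    " + C(race_eth) + C(sex) + C(educ) + C(married)"
    " + C(employed) + C(region)",
    data=df,
)
result = model.fit()
# The coefficient on young:post is the difference-in-differences estimate
# of the policy's effect on coverage.
print(result.params["young:post"])
```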
The primary outcome for our coverage analyses was whether a person reported having “any insurance.” We conducted additional analyses of private coverage and public coverage separately.
We then examined the following three measures of access to care for adults in our sample: whether they reported having a usual source of care other than an emergency department, whether they had delayed care because of cost in the prior year, and whether they had gone without needed care in the prior year. Information on usual source of care is available for only one adult per household in the National Health Interview Survey, so our sample for this measure was smaller, and our power to detect changes in it lower, than for the other measures.
Our base analysis estimated the policy’s average effect on coverage and access throughout the period after it was implemented, beginning with the fourth quarter of 2010. However, the policy’s full impact probably did not occur immediately.
Plans were required to offer dependent coverage to young adults upon renewal after September 23, 2010. Since coverage is often extended on a calendar-year basis, it is likely that many families and insurers did not renew policies until January 2011, or perhaps even later. Because coverage and access gains probably grew over time, we also estimated models that traced the effect of the policy quarter by quarter, instead of averaging all post-implementation quarters into a single overall effect.
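A quarter-by-quarter version of the model might look like the following sketch, reusing the hypothetical variable names from the earlier example; post_quarter is an assumed column labeling each post-implementation quarter, with the pre-period as the reference category.

```python
import statsmodels.formula.api as smf

# Event-study-style sketch: interact the treatment indicator with each
# post-implementation quarter ('pre' is the assumed reference level).
event_model = smf.ols(
    "any_insurance ~ young * C(post_quarter, Treatment('pre'))"
    " + t + I(t**2) + C(race_eth) + C(sex) + C(educ)"
    " + C(married) + C(employed) + C(region)",
    data=df,
)
event_result = event_model.fit()
# Each young:C(post_quarter, ...)[T.<quarter>] coefficient traces the
# policy's effect in that quarter relative to the pre-period.
print(event_result.params.filter(like="young:"))
```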
We also assessed the policy’s impact on different subgroups. We tested for these effects separately using the National Health Interview Survey and the Current Population Survey, since the former data set offers more precise timing and more recent data, while the latter data set offers larger sample sizes and additional variables. We measured changes in “any insurance” with our sample stratified by sex, marital status, race or ethnicity, employment status, respondent-reported health status, and full-time student status (available in the Census Bureau data only). We then tested for subgroup differences in the policy’s impact on coverage and access to care.
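Subgroup effects of this kind are commonly estimated either by stratifying the sample or by adding an interaction term; the sketch below illustrates both approaches under the same hypothetical variable names as before, stratifying by sex as an example.

```python
import statsmodels.formula.api as smf

# Stratified estimation: fit the model separately within each subgroup.
for group, sub in df.groupby("sex"):
    res = smf.ols(
        "any_insurance ~ young * post + t + I(t**2)", data=sub
    ).fit()
    print(group, res.params["young:post"], res.bse["young:post"])

# Triple-interaction test of whether the policy effect differs by subgroup.
triple = smf.ols(
    "any_insurance ~ young * post * C(sex) + t + I(t**2)", data=df
).fit()
print(triple.pvalues.filter(like="young:post"))
```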
Our sample from the National Health Interview Survey contained 116,536 respondents, after we dropped 1,605 observations (1.3 percent) that were missing information on insurance status and 5,336 (4.3 percent) that were missing information on control variables. The analysis of usual source of care had 47,372 observations for sample adults, after we dropped 2,065 (4.2 percent) that had missing values.
Our sample from the Census Bureau data included 247,370 subjects. All analyses used weighting to produce national estimates and standard errors that accounted for the complex survey design.
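A minimal sketch of how such weighted estimates might be produced follows, assuming hypothetical weight and psu columns for the survey weight and primary sampling unit; a full design-based analysis incorporating strata would typically use dedicated survey routines.

```python
import statsmodels.formula.api as smf

# Weighted least squares with cluster-robust standard errors as a rough
# approximation to design-based variance estimation ('weight' and 'psu'
# are assumed column names, not the surveys' actual design variables).
wls = smf.wls(
    "any_insurance ~ young * post + t + I(t**2)",
    data=df,
    weights=df["weight"],
)
wls_result = wls.fit(cov_type="cluster", cov_kwds={"groups": df["psu"]})
print(wls_result.summary())
```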
Limitations

Each of our two data sources has distinct advantages, as well as limitations. As noted, the National Health Interview Survey is ideal for analyzing the timing of the policy’s impact. The main limitation of this survey is its relatively small sample size, which reduced our power to detect differences among subgroups.
With its larger sample size, the Census Bureau survey is better suited for subgroup analyses. However, it is limited by the imprecise timing of its insurance coverage data. The survey is conducted in March of each year and asks respondents to report all forms of coverage over the prior calendar year. The last date of coverage that should be captured in the 2011 data set is December 31, 2010, although some individuals may mistakenly report their current coverage instead.16 As a result, our analysis of Census Bureau data might capture some policy effects through March 2011.
In addition, our strategy relied on the assumption that people ages 26-34 are a good control group for those ages 19-25. Several factors support the assumption that, in the absence of the policy, coverage would have trended similarly for the two groups. For the period just before the policy went into effect, we found no significant difference between the coverage trends of the two groups. Although other provisions of the Affordable Care Act, namely the creation of new insurance pools to cover people with preexisting conditions, did go into effect at the same time, enrollment in these pools was modest (21,000 people of all ages, by April 2011).17
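One simple way to check this parallel-trends assumption is to test, in the pre-implementation data alone, whether the two age groups’ coverage trends differ; a sketch with the same hypothetical variables:

```python
import statsmodels.formula.api as smf

# Pre-trend check: restrict to pre-implementation quarters and test whether
# the coverage trend differs between the treatment and control age groups.
pre = df[df["post"] == 0]
trend = smf.ols("any_insurance ~ young * t", data=pre).fit()
# An insignificant young:t coefficient is consistent with parallel pre-trends.
print(trend.pvalues["young:t"])
```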
Ideally, we would like to understand how the effect of the insurance expansion varied by socioeconomic status. However, assessing socioeconomic status is challenging for young adults.