Measuring Coverage Rates in a Pandemic

My coauthors (Matt Brault and Ben Sommers) and I published a piece in JAMA Health Forum yesterday laying out the challenges inherent in evaluating how the COVID-19 pandemic has affected health coverage. We’ve also put together a table of surveys used to study coverage—including national, nongovernmental surveys fielded by private organizations in response to the pandemic—as a general resource.

The usual-suspect national surveys (ACS, CPS, NHIS) won’t make 2020 data available for months yet. The Census Bureau (in coordination with BLS and CDC) undertook the herculean task of standing up and fielding a rapid-response survey starting in April, but the first wave had a worryingly low response rate, and it’s hard to assess the reliability of its estimates without a pre-pandemic baseline. Private organizations have stepped up to field fast national surveys, but those necessarily obscure important regional variation. Local surveys have sprung up in some places—SHADAC has a great resource of available surveys—but they can’t be directly compared with one another. The pandemic itself has affected survey administration (government surveys were paused in the early spring) and response rates.

Most worryingly, the gold-standard government surveys use coverage metrics that make it hard to understand when in the year someone lost coverage, which will hamper future research. In ordinary times, coverage rates averaged across the year may not be particularly bothersome. But when we’re evaluating a pandemic recession that has ebbed and flowed across the country, annual averages fall short of meeting research needs.

Fortunately, better data is possible:

Currently, the CPS Annual Social and Economic Supplement public files report coverage at the time of interview and whether a respondent ever had coverage in the prior year. The survey collects more granular data—not included in public files—about coverage in each month of the year prior to the interview. The ACS measures coverage at the time of interview and surveys respondents throughout the year, but it does not disclose the month of interview in its public data release. A data set that blends coverage rates from January, May, and December 2020 into a single average obscures the most critical effects of the COVID-19 pandemic. If the Census Bureau released the CPS longitudinal file and the ACS month of interview (or even quarter of interview), with appropriate confidentiality protections in place, researchers’ ability to identify coverage changes before, during, and after the pandemic would improve immeasurably.
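
To make the blending problem concrete, here is a minimal sketch using entirely made-up monthly uninsured rates (not estimates from any survey): a mid-year spike that month-of-interview data would reveal all but disappears in a single blended annual average.

```python
# Hypothetical (made-up) uninsured rates by interview month, for illustration only.
monthly_uninsured_rate = {
    "Jan": 0.100, "Feb": 0.100, "Mar": 0.105, "Apr": 0.125,
    "May": 0.140, "Jun": 0.135, "Jul": 0.130, "Aug": 0.125,
    "Sep": 0.120, "Oct": 0.115, "Nov": 0.112, "Dec": 0.110,
}

# What a month-of-interview release would let researchers see: the mid-year peak.
peak_month = max(monthly_uninsured_rate, key=monthly_uninsured_rate.get)
print(f"Peak uninsured rate: {monthly_uninsured_rate[peak_month]:.1%} in {peak_month}")

# What a blended annual file reports instead (assuming equal monthly samples):
annual_average = sum(monthly_uninsured_rate.values()) / len(monthly_uninsured_rate)
print(f"Blended annual estimate: {annual_average:.1%}")
```

In this toy example the blended estimate lands around 12 percent, masking a peak of 14 percent in May and making the spring coverage shock invisible to anyone working only with the annual public file.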

There is precedent for this kind of change in public use files under extraordinary circumstances: for the 2005 ACS, the Census Bureau made flags available so that researchers could distinguish between people interviewed before and after Hurricane Katrina in the affected Gulf region. It’s possible that the Census Bureau is already considering this kind of accommodation, but we thought that this issue—and an available solution—should be on the radar of the broader health services research community.

You can read the full piece here.

Adrianna (@onceuponA)
