A Change in Our Poll: We’re Keeping Respondents Who Drop Off the Call

On Saturday, we’re releasing the results of the latest New York Times/Siena College national poll, including what voters think about the candidates, the election and the state of the country.

This time, we’re making a modest methodological change that we wanted to tell you about in advance: We’re keeping respondents who started our survey but then “dropped off” before the end of the interview.

It’s a bit wonky (6/10, I’d say), but hopefully helpful for those following our polls closely. It does move our results, albeit by only a percentage point.

Here’s the basic problem: The interviews for our national surveys are conducted by phone (mostly cellphones), and they take about 15 minutes to complete. About 15 percent of the respondents who tell us how they’ll vote in a coming election decide to stop taking the survey — politely or not — before answering all our questions.

We’ve been calling these respondents “drop-offs.”

Careful readers of this newsletter know we’ve been interested in “drop-off” respondents since our Wisconsin experiment in 2022. The “drop-offs” are less likely to vote, less likely to have a college degree, younger and more diverse.

These are exactly the kind of respondents whom pollsters already struggle to get to take polls, making it all the more frustrating that we lose a disproportionate number of them while a survey is underway.

Even if there’s no effect on the result, losing these respondents reduces our response rate, drives up costs and increases the need for “weighting” — a statistical technique to give more weight to respondents from groups who would otherwise be underrepresented. At worst, the “drop-offs” may have different political views than the demographically similar respondents who finish the interviews, biasing our survey toward the most interested poll respondents.
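
For the wonks: here’s a minimal sketch of the idea behind weighting, using entirely hypothetical numbers. The actual Times/Siena weighting involves many more variables and is considerably more elaborate, but the core arithmetic is a ratio of target share to sample share.

```python
# A minimal sketch of post-stratification weighting, with hypothetical numbers.
# If a group makes up 20 percent of the target population but only 10 percent
# of the interviews, each of its respondents counts twice as much.

population_share = {"18-29": 0.20, "30-64": 0.55, "65+": 0.25}  # assumed targets
sample_share = {"18-29": 0.10, "30-64": 0.60, "65+": 0.30}      # assumed interviews

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # roughly {'18-29': 2.0, '30-64': 0.92, '65+': 0.83}
```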

Over the last eight Times/Siena polls, we’ve been evaluating the effect of losing these voters and experimenting with ways to retain them. The only visible sign of this experimentation is that we’ve been asking about age and education early in our surveys — questions that have allowed us, behind the scenes, to more fully evaluate how these respondents differ.

Despite their demographics, the drop-off respondents are likelier to back Donald J. Trump than those who complete the survey. Across the last eight Times/Siena surveys, Mr. Trump had a nine-point lead over President Biden among drop-off voters, compared with a three-point lead among those who completed the survey. Notably, this Trump edge survives or even grows after controlling for the demographic characteristics we use for weighting, like race and education. As a result, the average Times/Siena result among registered voters would have shifted from Trump +3 over the last eight surveys to Trump +4.
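
The back-of-the-envelope version of that shift, assuming drop-offs are about 15 percent of respondents, as estimated above:

```python
# Illustrative arithmetic for the one-point shift, assuming drop-offs make up
# about 15 percent of respondents (per the estimate earlier in this piece).

completer_margin = 3    # Trump +3 among those who finished the survey
dropoff_margin = 9      # Trump +9 among those who dropped off
dropoff_share = 0.15

blended = (1 - dropoff_share) * completer_margin + dropoff_share * dropoff_margin
print(round(blended, 1))  # 3.9 -- about Trump +4 once the drop-offs are retained
```

The real calculation runs through the full weighting, but the rough arithmetic lands in about the same place.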

This one-point shift does not show up in every poll. But it holds for our last Times/Siena poll in December, which showed Mr. Trump up by two points among registered voters and would have shown him ahead by three had we retained the drop-offs.

It’s also true of the Times/Siena poll we’re going to release Saturday morning, which would be one point better for Mr. Biden without the drop-off respondents.

It’s not a common practice to keep the drop-offs. I think almost everyone would agree that these respondents are worth trying to include in a survey, but there are serious practical challenges to doing so.

The difficulty centers on how to handle all the questions toward the end of the survey that a large share of respondents never answered.

This creates two specific problems.

The first is weighting: A drop-off respondent never gets to the demographic questions we use to ensure a representative sample. The solution here is relatively straightforward: Ask the key demographic questions toward the beginning of the survey, and count anyone who makes it past them as a “completed” interview.

The second, more challenging problem is how to report the results of questions asked later in the survey.

Imagine, for a moment, that the final question of a poll is whether the respondents are liberal, moderate or conservative, and the respondents say they’re 25 percent liberal, 35 percent conservative and 40 percent moderate. Imagine that 15 percent of the initial respondents have dropped off by this point in the survey as well.

If we retain the drop-off respondents and do nothing else, the industry standard is to report a result like 21-30-34, with the remaining 15 percent unknown because they dropped off, rather than 25-35-40. That would be frustrating for many questions. It could even lead readers to complain that we have too few liberals or conservatives, if they don’t do the math to extrapolate the numbers we might have had without the drop-offs.
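
To make the arithmetic concrete, here’s how those reported figures follow from the hypothetical split above:

```python
# How the 21-30-34 figures arise from the hypothetical above: scale the
# completers' 25-35-40 split by the 85 percent of respondents who answered.

answered_share = 0.85
liberal, conservative, moderate = 25, 35, 40  # percentages among answerers

reported = [round(p * answered_share) for p in (liberal, conservative, moderate)]
print(reported)  # [21, 30, 34], with the remaining 15 percent unknown

# A reader who wants the drop-off-free numbers back has to divide by 0.85:
print([round(p / answered_share) for p in reported])  # [25, 35, 40]
```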

Worse, the respondents answering by the end of the survey will not be representative of the full population. After all, the drop-offs are disproportionately nonwhite, young and less educated. That means that the 85 percent of respondents answering at the end will be disproportionately white, old and highly educated.
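
A stylized example of that composition effect, again with entirely hypothetical shares:

```python
# Stylized composition arithmetic with hypothetical shares: if 30 percent of
# the full sample is nonwhite but 40 percent of the drop-offs are, the 85
# percent who remain at the end of the survey must be whiter than the target.

overall_nonwhite = 0.30   # assumed share in the full sample
dropoff_nonwhite = 0.40   # assumed share among drop-offs
dropoff_share = 0.15

remaining_nonwhite = (overall_nonwhite - dropoff_share * dropoff_nonwhite) / (1 - dropoff_share)
print(round(remaining_nonwhite, 3))  # 0.282 -- below the 30 percent target
```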

Oddly enough, then, retaining the drop-off voters will generally wind up biasing the results of questions toward the end of the survey against the drop-offs themselves.

Here’s what we’ll do. For the first half of the survey, we will report the results from all 980 respondents who answered the questions used for weighting, including the 157 who dropped off later in the survey. They will be weighted in the same manner as an ordinary Times/Siena poll.

For questions asked after the demographic questions used for weighting, we will report the results from the 823 respondents who completed the entire questionnaire. This is the group that would have constituted the full Times/Siena sample in the past. They will be weighted separately in the same manner as an ordinary Times/Siena poll, with one twist: They will also be weighted to match the general election results of the full sample, including the drop-offs.
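
Here’s a minimal sketch of that twist, using hypothetical vote shares; the actual adjustment happens inside the broader Times/Siena weighting rather than as a literal multiplication like this.

```python
# A minimal sketch of the "one twist," with hypothetical vote shares: after
# the usual demographic weighting, the completers' weights are adjusted so
# their general-election result matches the full sample, drop-offs included.

full_sample = {"Trump": 0.48, "Biden": 0.45, "Other/Unsure": 0.07}  # 980 respondents
completers = {"Trump": 0.47, "Biden": 0.46, "Other/Unsure": 0.07}   # 823 completers

# Multiply each completer's weight by the ratio for their stated vote,
# up-weighting choices the completers understate and down-weighting the rest.
adjustment = {choice: full_sample[choice] / completers[choice] for choice in full_sample}
print({c: round(r, 3) for c, r in adjustment.items()})
# {'Trump': 1.021, 'Biden': 0.978, 'Other/Unsure': 1.0}
```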

The most obvious change is that there are 157 fewer respondents in the second half of the survey than in the first. But there’s more to it: The demographic makeup of the 823 respondents will be ever so slightly different from that of the full sample, since even weighting doesn’t force a perfect alignment between the characteristics of a poll and those of the intended population. Hopefully, readers will find this tolerable; if not, there are other options we can adopt in the future. This is, after all, the first time we’re trying this. I expect we’ll gradually get better at presenting these results, especially once we see what other people notice.

So if you find yourself dissatisfied when you look at our poll results tomorrow, let us know!
