You Ask, We Answer: How The Times/Siena Poll Is Conducted

Polls can give us important insight into how people’s views on the issues, the state of the country and the candidates may affect how they vote. Polls have helped us understand how motivating issues like abortion and immigration are for Americans, how the war in Gaza is perceived by Democrats and Republicans, and how many voters believe misinformation.

A poll taken today is a snapshot of how voters feel. But we expect that things will change between now and Election Day as priorities shift, candidates make their cases and the decision feels more real for voters.

Even so, a poll done at this stage can help us understand how voters are assessing the candidates and the issues at play. When we ask the same questions over time, we may be able to see how candidates’ actions and behaviors are affecting voters’ views.

The New York Times/Siena College Poll is conducted by phone using live interviewers at call centers based in Florida, New York, South Carolina, Texas and Virginia. Respondents are randomly selected from a national list of registered voters, and we call voters both on landlines and cellphones. In recent Times/Siena polls, more than 90 percent of voters were reached by cellphone.

One of the most common questions we get is how many people answer calls from pollsters these days. Often, it takes many attempts to reach some individuals. In the end, fewer than 2 percent of the people our callers try to reach will respond. We try to keep our calls short — less than 15 minutes — because the longer the interview, the fewer people stay on the phone.

Phone polls used to be considered the gold standard in survey research. Now, they’re one of many acceptable ways to reach voters, along with methods like online panels and text messages. The advantages of telephone surveys have dwindled over time, as declining response rates increased the costs and probably undermined the representativeness of phone polls. At some point, telephone polling might cease to be viable altogether.

But telephone surveys remain a good way to conduct a political survey. They’re still the only way to quickly reach a random selection of voters, as there’s no national list of email addresses, and postal mail takes a long time. Other options — like recruiting panelists by mail to take a survey in advance — come with their own challenges, like the risk that only the most politically interested voters will stick around for a poll in the future.

In recent elections, telephone polls — including The Times/Siena Poll — have continued to fare well, in part because voter registration files offer an excellent way to ensure a proper balance between Democrats and Republicans. And perhaps surprisingly, a Times/Siena poll in Wisconsin had similar findings to a mail survey we commissioned that paid voters up to $25 to take a poll and obtained a response rate of nearly 30 percent.

Our best tool for ensuring a representative sample is the voter file — the list of registered voters that we use to conduct our survey.

This is a lot more than a list of phone numbers. It’s a data set containing a wealth of information on 200 million Americans, including their demographic information, whether they voted in recent elections, where they live and their party registration. We use this information at every stage of the survey to try to ensure we have the right number of Democrats and Republicans, young people and old people, or even the right number of people with expensive homes.

On the front end, we try to make sure that we complete interviews with a representative sample of Americans. We call more people who seem unlikely to respond, like those who don’t vote in every election. We make sure that we complete the right number of interviews by race, party and region, so that every Times/Siena poll reaches, for instance, the correct share of white Democrats from the Western United States.

Once the survey is complete, we compare our respondents to the voter file, and use a process known as weighting to ensure that the sample reflects the broader voting population. In practice, this usually means we give more weight to respondents from groups who are relatively unlikely to take a survey, like those who didn’t graduate from college.
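The weighting step described above can be sketched in a few lines. This is a simplified illustration with made-up numbers, not the Times/Siena weighting procedure itself: each respondent's weight is their group's share of the electorate (which in practice would come from the voter file) divided by that group's share of the survey sample.

```python
# A minimal sketch of weighting, with hypothetical shares.
# Real targets would come from the voter file, and real weighting
# balances many characteristics at once, not just one.
population_share = {"college": 0.40, "no_college": 0.60}  # electorate
sample_share = {"college": 0.55, "no_college": 0.45}      # respondents

weights = {
    group: population_share[group] / sample_share[group]
    for group in population_share
}

for group, w in weights.items():
    print(group, round(w, 2))
```

Non-college respondents, underrepresented in this hypothetical sample, receive a weight above 1, so each one counts for a bit more; college respondents receive a weight below 1.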

You can see more information about the characteristics of the voters we reached and how much each group was adjusted in the weighting step at the bottom of our poll cross-tabs, under “Composition of the Sample.”

In 2022, we did an experiment to try to measure the effect nonresponse has on our phone polls. In our experiment, we sent a mail survey to voters in Wisconsin and offered to pay them up to $25 to respond. Nearly 30 percent of households took us up on the offer, a significant improvement over the 2 percent or so who typically respond by phone.

What we found was that, overall, the people who answered the mail survey were not all that dissimilar from the people we regularly reach on the phone, on matters including whom they said they would vote for. However, there were differences: The respondents we reached by mail were less likely to follow what’s going on in government and politics; more likely to have “No Trespassing” signs; and more likely to identify as politically moderate, among other things.

But the truth is that there’s no way to be absolutely sure that the people who respond to surveys are like demographically similar voters who don’t respond. It’s always possible that there’s some hidden variable, some extra dimension of nonresponse that we haven’t considered.

The core concept underlying survey research is the idea of sampling: You don’t need to talk to everyone in order to get a good idea of the whole population; you just need a sample.

You may not know it, but sampling is something that you probably use in your everyday life. You don’t need to eat a whole pot of soup to know what the soup tastes like; you only need a spoonful.

Of course, sampling only works if the subset you taste is representative of the whole. Pollsters usually attempt to obtain a representative sample through random sampling, where everyone has an equal chance of selection.

If you had a truly random sample of Americans, then in theory merely a few hundred people would be enough to measure public opinion with reasonable accuracy — much as you would probably realize that a coin flip is a 50-50 proposition after a few hundred tries.
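The coin-flip analogy is easy to check with a quick simulation (a toy example, not anything from the poll itself): as the number of random draws grows into the hundreds, the observed rate settles near the true 50-50 split.

```python
import random

# Simulate flipping a fair coin n times and report the share of heads.
# With a few hundred truly random draws, the estimate lands close to 0.5.
random.seed(0)  # fixed seed so the run is reproducible

for n in (10, 100, 400):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)
```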

One counterintuitive aspect of sampling is that a survey of 10,000 people is not 10 times better than a survey of 1,000; in fact, the larger poll could be less accurate if the people surveyed are not representative.

A survey of 1,000 voters has a margin of sampling error of around three to four percentage points. In practice, that means if the poll shows that 57 percent of voters approve of something, the real figure could be closer to 54 percent or 60 percent.

If we doubled the number of people we polled — or better yet, tripled it — the margin of error would go down only slightly, to maybe plus or minus two percentage points. Which is to say, the overall accuracy of the survey would not improve very much.

However, if the number of respondents decreases too much, the margin of error can increase drastically. That’s important to understand when looking at results among demographic subgroups that are smaller in size.

In the 2022 midterm elections, Times/Siena poll results were, on average, within two points of the actual result across the races we surveyed in the final weeks of the campaign. That helped make The Times/Siena Poll the most accurate political pollster in the country, according to the website FiveThirtyEight.

At the same time, all polls face real-world limitations. For starters, polling is a blunt instrument, and as the margin of error suggests, numbers could be a few points higher or a few points lower than what we report. In tight elections, a difference of two percentage points can feel huge. But on most issues, that much of a difference isn’t as consequential.

Historically, national polling error in a given election is around two to four percentage points. In 2020, on average, polls missed the final result by 4.5 percentage points, and in some states the final polls were off by more than that. In 2016, national polls were about two percentage points off from the final popular vote.

When we are getting ready to field a poll, we think about what is happening in the world that might be changing people's attitudes, and about what upcoming events could cause public opinion to shift.

Sometimes we conduct a poll to measure the impact of a specific event, like a presidential debate. And sometimes we conduct a poll because we want to check in on how Americans are thinking about a particular issue, like the economy. One of the goals of our April poll was to set a benchmark for how voters were feeling before the start of former President Donald J. Trump’s criminal trial in Manhattan.

Once we have topics nailed down for a poll, we spend a tremendous amount of time debating and crafting the questions. Our goal with any single question is to ensure that every respondent, across the political spectrum, feels their viewpoint is accurately represented as a response option. We want to make sure everyone sees the question as fair.

We also want to make sure that the question is understood by everyone to mean the same thing. And finally, it is important that we are measuring real views that people have, not putting ideas into their heads or pushing them in a particular direction. Crafting accurate survey questions is an art that we take very seriously.

The Times/Siena Poll is produced by Camille Baker, Nate Cohn, William P. Davis, Ruth Igielnik, Christine Zhang and the team at the Siena College Research Institute.

by NYTimes