Your guide to the chaotic 2024 polling year ahead

So here, on the eve of the election year, is your guide to the chaotic polling year ahead: a list of things every knowledgeable and interested political observer should look for. These will help you know which 2024 polls to pay attention to, which to ignore, and to which you should apply a healthy measure of skepticism. And readers (and social media posters, especially) should also know what the poll results actually mean, and what they don’t.

Some core principles remain the same, like paying attention to who commissioned a poll and its margin of error. But being an informed reader of the polls requires even more now. That includes taking note — and demanding greater disclosure about — how poll respondents were interviewed and selected to participate in the first place. And in our polarized country in which hyper-consequential elections come down to narrow margins, what does it really mean for one candidate to be “leading”?

It’s not even 2024, but the debates over the polling between former President Donald Trump and President Joe Biden have already started. So here’s what to know to read the polls — and what you deserve to know from pollsters:

Pollsters changing how they interview people

Just four years ago, almost all of the 2020 election polls were conducted through either telephone calls or online interviews.

Now, pollsters aren’t just embracing new methodologies — they’re mixing methods within individual polls to cobble together representative samples.

While some polls still exclusively use one methodology, many combine phone interviews with web-based approaches — whether respondents are contacted by text or email, are existing members of an internet panel, or complete the survey after clicking on an ad on another website.

CNN’s most recent poll was conducted via a mix of phone calls and online interviews with respondents selected by mail (more on this below). The Wall Street Journal’s most recent poll combined phone interviews with online responses among voters reached by text message.

As Americans’ communication habits have changed, there’s not necessarily a gold standard for polling anymore. Each methodology has its advantages and risks — many of which won’t be known until after the 2024 election.

That’s why it’s increasingly important to know how people were interviewed. Readers of public polls should demand — and take note of — the poll’s “mode” or method of interview.

New ways of reaching people

The sweeping methodological changes in polling include how people were chosen to participate in the first place.

Two decades ago, virtually all public polling was conducted by randomly dialing telephone numbers — using the area code and exchange to pinpoint geographies — to achieve a representative sample. But that’s when nearly every American lived in a home with a landline phone.

Now, polls conducted that way represent a distinct minority. Instead, many pollsters use voter files, borrowing from the toolbox of internal campaign pollsters who’ve long sought to target people they know are registered to vote.

That can still come in the form of telephone surveys, but some pollsters are using other methods. CNN finds some of its state poll respondents by mailing solicitations to registered voters at the home addresses listed in the voter file. They’re then invited to complete the survey online.

Most other internet polls use existing panels of people (not just registered voters) who’ve signed up to complete surveys. Some panels are assembled randomly — what’s known as probability sampling — like the CNN poll. Others, including the POLITICO|Morning Consult poll, use “opt-in” panels of users who’ve already volunteered to complete surveys instead of being randomly recruited.

So which method is best for elections? When it comes to phone polling, most pollsters consider surveys conducted from the voter file to be better than those which call randomly generated phone numbers. There’s a lot of information — gender, turnout history, race and party registration in some states — that can be gleaned from voter files.

As for internet polls, a Pew Research Center study this year found those built on probability samples were more accurate on most measures than those from opt-in panels. But the one measure on which the probability polls were worse? Turnout in the 2020 election, suggesting the advantages of probability online polls don’t necessarily extend to election polling.

Fixing what went wrong in 2020 and 2016

If trying multiple ways to reach swaths of the electorate isn’t enough, pollsters have another, more blunt trick up their sleeves: asking people for whom they voted in the last presidential election.

Weighting a poll to match the 2020 election results is increasingly common among pollsters, especially as a way to account for the underestimation of Trump and the GOP in the past two presidential races. Pollsters have found that traditional measures of party identification may not be sufficient — for example, Republicans who respond to polls are generally less supportive of Trump than those who don’t.
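Mechanically, weighting on recalled past vote is straightforward: each respondent gets a weight equal to their recalled-vote group’s actual 2020 share divided by that group’s share of the raw sample. A minimal sketch, using an illustrative, hypothetical sample that overstates Biden’s actual 51.3 percent of the 2020 national vote:

```python
from collections import Counter

# Hypothetical raw sample: 54% recall voting for Biden, 40% for Trump,
# 6% for someone else -- overstating Biden's actual 2020 result.
respondents = ["biden"] * 54 + ["trump"] * 40 + ["other"] * 6

# Actual 2020 national popular vote shares (the weighting targets).
target = {"biden": 0.513, "trump": 0.468, "other": 0.019}

counts = Counter(respondents)
n = len(respondents)

# Weight = target share of the group / the group's share of the sample.
weights = {group: target[group] / (counts[group] / n) for group in counts}

# After weighting, the sample's recalled-vote distribution matches 2020:
# Trump recallers count for more than one respondent each, Biden
# recallers for slightly less.
weighted_shares = {g: weights[g] * counts[g] / n for g in counts}
```

In this sketch, each Trump recaller is weighted up (to about 1.17) and each Biden recaller down (to 0.95) — which is exactly why the practice can shrink a Democratic lead relative to an unweighted poll.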

Most pollsters find it helps, but it’s not a panacea. The New York Times’ polls with Siena College in 2022 would’ve been less accurate if they had weighted the results to respondents’ recall of their 2020 presidential vote, the paper has written.

Another reason why you should pay attention to whether pollsters are weighting on recalled past vote: Those who do see less volatility, with the practice smoothing out some of the jumps from survey to survey.

What is a “lead”?

The tight margins by which presidential elections have been decided in recent years make it even harder to read election polls.

That’s why it’s important to consider the margin of error — and whether one candidate has a meaningful lead over the other. If you see a poll showing Biden leading Trump by 2 points — like this week’s New York Times/Siena College poll did — that lead is not statistically significant.

I have a pretty simple shorthand: If the margin between the candidates is less than the poll’s margin of error, there is no clear leader. You could call the race a dead heat, or a virtual tie.

The key here is that the margin of error applies to both candidates’ vote shares. The margin of error for the New York Times/Siena poll was plus or minus 3.7 percentage points — that means a 2-point lead is well within the margin of error.
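Where does a figure like 3.7 points come from? Under the textbook formula for a proportion at 95 percent confidence — an assumption; real pollsters also adjust for weighting and design effects — the margin of error depends on the sample size:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Textbook 95% margin of error, in percentage points, for a
    proportion p measured in a simple random sample of size n.
    (Real polls adjust this upward for weighting/design effects.)"""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# A +/- 3.7-point margin of error corresponds to an effective
# sample of roughly 700 respondents.
print(round(margin_of_error(700), 1))  # -> 3.7
```

Note the formula shrinks only with the square root of the sample size — quadrupling the sample merely halves the margin of error, which is why even large polls carry a few points of uncertainty.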

If the margin between the candidates is between one and two times the margin of error, you can consider the leading candidate to have a “slight” advantage. Yes, it’s possible that candidate isn’t actually ahead, since the margin of error applies to both figures. But it’s just as possible that they have a larger lead than the poll indicates.

If the margin between the two candidates is double the margin of error or greater, the leading candidate can be described as significantly ahead.
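The shorthand above reduces to a simple comparison of the gap between the candidates and the poll’s margin of error. A sketch, with illustrative numbers from the New York Times/Siena poll mentioned earlier:

```python
def describe_lead(candidate_a: float, candidate_b: float, moe: float) -> str:
    """Classify a polling lead relative to the margin of error (moe),
    all in percentage points."""
    gap = abs(candidate_a - candidate_b)
    if gap < moe:
        return "no clear leader"    # within the margin of error
    if gap < 2 * moe:
        return "slight advantage"   # between one and two times the MOE
    return "significant lead"       # double the MOE or greater

# NYT/Siena: a 2-point Biden lead with a +/- 3.7-point margin of error.
print(describe_lead(46, 44, 3.7))  # -> no clear leader
```

A 2-point gap against a 3.7-point margin of error lands squarely in “no clear leader” — which is why calling such a race a dead heat is the honest description.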

Here’s a real-world example from this week: In our
POLITICO|Morning Consult poll of likely voters in California’s March 5 primary
— which had a margin of error of plus or minus 3 percentage points — Democratic Rep. Adam Schiff had a 9-point lead over his closest rival in the state’s Senate race, Republican Steve Garvey, 28 percent to 19 percent.

That meant Schiff had a clear lead. But in the California primary, the top two candidates advance to the general election, regardless of party. And even though Garvey was technically in second place, Democratic Rep. Katie Porter was just 2 points behind him, meaning the two are best considered (and described as) neck and neck.

What else you need to know

Just because there’s new information to consider in the changing polling landscape doesn’t mean the old rules don’t apply.

Pay attention to who sponsored the poll: Is it a political candidate, party committee, partisan media outlet or other outside group that might be using the results to advance an agenda? You don’t need to throw the poll in the trash necessarily — the firm that conducted the poll still has a reputation to protect. But consider the results with a grain of salt.

When was the poll conducted? Was it before or immediately after a major news event that might influence the results? Was it only in one day, which tends to mean only the easiest-to-reach voters would have responded?

Does the result look very different from other polls? Outliers happen — as a matter of statistical principle, about 5 percent of polls will land outside the margin of error.

A divergent result doesn’t automatically mean it’s wrong: Something about the election may have shifted since the other polls were conducted. But it’s usually a good idea to wait for more evidence, one way or the other.

And don’t forget the most important rule: patience. In addition to the specific advice above, it’s a virtue that will serve us all well reading the polls in 2024. It’s easy — but rarely prudent — to jump to broad conclusions based on the result of one poll.
