Gonzales

Please Don’t Call It a Push Poll: Transgender Edition

A survey either measures public opinion or it doesn’t

Virginia state House candidate Danica Roem complained about a “push poll” run against her last month. (Courtesy Danica Roem for Delegate/Facebook)

Danica Roem and Bob Marshall are facing off in an unusually high-profile race for the Virginia House of Delegates — Roem, a transgender Democrat, is challenging Marshall, a conservative Republican. The race reached a new level in the final weeks when allegations of a so-called push poll came to light.

My longtime colleague Stuart Rothenberg jokes that there are some columns that need to be written over and over again. The debate over push polls is one of those topics.

Last month, Roem took to Facebook to report an “anti-transgender robo push poll” and to Twitter, where she called it an “anti-trans robocall.” The Washington Blade picked up the story and called them “anti-transgender robocalls” in a write-up of the incident.

Roem and the Democrats didn’t like that the people contacted were read a handful of the candidate’s alleged policy positions and asked whether those positions affected their vote. The incident seemed even more provocative when scratchy audio of one of the calls was posted online.

A subsequent story in The Washington Post used the term “robo-calls” in the lede but described the calls as part of a telephone poll later in the piece.

Robocalls and polls are two very different things, and I’d argue the term push poll should be banned from usage. It’s either a poll or it’s not.

The American Association for Public Opinion Research (AAPOR) defines a push poll as “a form of negative campaigning that is disguised as a political poll. ‘Push polls’ are actually political telemarketing — telephone calls disguised as research that aim to persuade large numbers of voters and affect election outcomes, rather than measure opinions.”

The AAPOR also links to Stu’s 2007 piece on push-polling, which is a must-read for any reporter or political operative who hasn’t read it yet. In fact, go read it now and come back when you’re done. 

In the Virginia case, the controversy stemmed from a poll of 341 likely voters in the 13th District, conducted Oct. 9-11 by Revily Inc. for the conservative American Principles Project. The group didn’t hide the fact that it was behind the poll: it included a list of message questions and results in an Oct. 16 press release, and subsequently released the entire survey to me and others.

The first question asked respondents about the likelihood they would vote on Nov. 7. The second question asked about President Donald Trump’s job approval rating (41 percent approved, 46 percent disapproved). The third question asked about Democratic Gov. Terry McAuliffe’s job rating (44 percent approved, 42 percent disapproved). And the fourth question asked about a hypothetical general election matchup between the two candidates.  (Marshall was up narrowly over Roem, 46 percent to 44 percent.)

Questions 5-10 were about policy positions that the American Principles Project connected to Roem, followed by, “Does this make you more or less likely to support Danica for the Virginia House of Representatives?” This is the point of contention.

Stu explained this well previously:

Serious polls can include push questions that contain some explosive or even incorrect information, but that doesn’t make them advocacy calls. Testing possible messages is a legitimate survey research function, and as long as the question is asked of a small sample and seeks to get a response to know whether the issue is useful in an election, it really doesn’t matter how negative the message is. Push questions are not the same thing as push polls. Push questions, which are included in a survey of only 500 to 1,000 respondents, are a legitimate part of a public opinion poll that seeks to test effective messages.

If the purpose of the American Principles Project’s calls was to simply spread negative information, calling a random sample of 341 voters when the electorate could vary from 10,000 to over 35,000 people (based on turnout in Marshall’s last five re-election races) would not be the most efficient way to do it. And the fact that multiple Roem supporters apparently received a call (as the candidate said) adds legitimacy to the poll. If the goal was to simply spread information to incite conservative voters, then the group likely wouldn’t have bothered calling Democrats.
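The efficiency point can be checked with quick arithmetic. Here is a minimal sketch in Python, using only the figures cited above (341 respondents, a 10,000-to-35,000 turnout range); the margin-of-error formula assumes a simple random sample, which may not match the pollster’s actual design:

```python
import math

n = 341                                      # respondents in the poll
low_turnout, high_turnout = 10_000, 35_000   # electorate range cited above

# Share of the likely electorate a 341-person sample could even reach
share_low = n / low_turnout                  # about 3.4% at low turnout
share_high = n / high_turnout                # about 1.0% at high turnout

# 95 percent margin of error for a proportion near 50 percent (worst case),
# assuming simple random sampling
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)        # about +/- 5.3 points

print(f"voters reached: {share_high:.1%} to {share_low:.1%} of the electorate")
print(f"margin of error: +/-{moe:.1%}")
```

At roughly 1 to 3 percent of the electorate reached, the calls are a poor vehicle for mass persuasion — and at ±5.3 points, the two-point Marshall lead in the horse-race question is well within the survey’s margin of error, which is why it reads as narrow.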

Now, maybe you didn’t like the fact that the survey only included negative information about Roem. That’s similar to last year, when Bernie Sanders’ supporters complained that a poll conducted before the Nevada caucuses tested negative messages against the senator from Vermont without testing any against Hillary Clinton. And as in Virginia, the availability of audio made it more provocative.

I talked to veteran Democratic pollster Mark Mellman for my piece on that Sanders poll last year. He reminded me that push polls were officially defined and condemned by the American Association of Political Consultants (AAPC) in December 1995; by the AAPOR, the National Council on Public Polls and the Center for Marketing and Opinion Research in 1996; by the Roper Center for Public Opinion Research in 2001; and in a 1995 letter to the AAPC from 35 Republican and Democratic pollsters.

“The profession has long had a consensus on what a push poll is and isn’t,” Mellman told me last year. “If everyone has their own personal definition of words and phrases, it’s very difficult to communicate.”

Sanders’ pollster refused to accept a bipartisan definition of the term that goes back more than 20 years because the negative information in the survey was one-sided.

Other pollsters who didn’t have a horse in the Sanders versus Clinton race agreed that pollsters talk about “push questions” in their scientific surveys, but don’t use the term push poll to refer to their own surveys that include them. They also said that parity of negative and positive messages between candidates is not only subjective and not always necessary; it is sometimes inadvisable, because each additional message biases respondents with information that limits the usefulness of their answers to later questions.

Why does this matter? As Stu wrote a decade ago, because referring to advocacy calls as push polls adds to public cynicism and, more importantly, discredits a legitimate survey research approach.

We’ll know soon enough whether Roem will knock off Marshall. The incumbent had a close race in 2013 (winning 51 percent to 49 percent), but won more easily in 2015 (56 percent to 44 percent), as he did in 2011 (60 percent to 40 percent), 2009 (61 percent to 39 percent) and 2007.

You may not like the type of questions the American Principles Project poll asked about Roem, and that’s your right. But please don’t call it a push poll.
