The Debate Over 'Partisan Polling' (Robert Coon On Politics)

by Robert Coon  on Wednesday, Aug. 20, 2014 3:51 pm  


(Editor's Note: This is an opinion column.)

With highly targeted and fiercely competitive races for U.S. Senate and governor, Arkansas has been a hotbed for public opinion polls in recent months, with a new poll being released just about weekly. For political junkies, campaign operatives and media outlets, fresh poll numbers feed the political beast and temporarily fill that insatiable desire to know who’s winning.

Additionally, as the public’s interest in polling results has increased, their effect on campaign momentum [1] and, subsequently, fundraising has grown significantly. Campaigns regularly cite polling numbers [2] in fundraising pleas, either to generate enthusiasm (and donations) if they’re in the lead or, if they’re down, to sound the alarm for urgent financial help. It’s worth noting here that the firm at which I am a partner, Impact Management Group, conducts public opinion polling for media outlets as well as corporate, non-profit, and yes, political clients. While our political leanings are Republican, our clients, corporate or otherwise, care about one thing: getting accurate data.

Generally speaking, increased access to public opinion data is a good thing, as public opinion research provides valuable insights into the mood of the electorate and can help identify trends that break down along racial, geographic, socioeconomic and gender lines. But while the sheer number of polls being released to the public has been on the rise, so has the number of critics crying foul over the assumed biases of so-called "partisan polling" – that is, polls conducted by partisan pollsters or sponsored by partisan-leaning organizations. While some of the critiques of partisan polling are well founded, a partisan connection or affiliation does not in and of itself mean the results aren’t valid, as some would have you believe.

Partisan vs. Non-Partisan Polling

A recent column titled “More Polls in Arkansas, but No More Clarity on Senate Contest,” by Nate Cohn of the New York Times, made the rounds among Arkansas political junkies a few weeks ago. In it, Cohn essentially argued that partisan polls shouldn’t be trusted. In fact, he went so far as to say partisan polling has been given “too much credit” in gauging the state of the Senate race, and implied that by nature partisan pollsters may “cook polling results.”

In my view, Cohn is half-right. I agree that pollsters have the ability to, as Cohn says, “move the results” one way or the other, whether for an unbiased purpose such as demographic weighting for turnout modeling, or by “cooking” the numbers to introduce bias intentionally. But I disagree with the broad implication that “partisan pollsters,” because of their partisan connections, are inherently predisposed to doing the latter rather than the former.
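For readers unfamiliar with what legitimate “moving the results” looks like, here is a minimal sketch of demographic weighting (post-stratification). Every number below is hypothetical and chosen purely for illustration: the idea is that if a group is under-represented in the raw sample relative to the expected electorate, its respondents are weighted up accordingly.

```python
# A minimal sketch of demographic weighting (post-stratification).
# All shares below are hypothetical, for illustration only.

# Hypothetical raw sample: share of respondents in each age group
sample_shares = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# Hypothetical turnout model: expected share of likely voters per group
target_shares = {"18-34": 0.20, "35-54": 0.38, "55+": 0.42}

# Each respondent in a group gets weight = target share / sample share,
# so under-sampled groups count for more and over-sampled groups for less.
weights = {g: target_shares[g] / sample_shares[g] for g in sample_shares}

# Hypothetical support for Candidate A within each group
support_a = {"18-34": 0.60, "35-54": 0.48, "55+": 0.42}

# Topline support before and after weighting
raw = sum(sample_shares[g] * support_a[g] for g in sample_shares)
weighted = sum(target_shares[g] * support_a[g] for g in sample_shares)

print(f"Unweighted support: {raw:.1%}")   # based on who answered the phone
print(f"Weighted support: {weighted:.1%}")  # based on who is expected to vote
```

Note how the same raw interviews yield a different topline once the turnout model is applied; the judgment calls live in the target shares, which is exactly where an unbiased adjustment and a “cooked” one can diverge.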

Contrary to popular belief, a “D” or an “R” designation next to a poll’s name doesn’t mean that its results are immediately questionable. In fact, in many cases, “partisan” polls have been shown to closely mirror their non-partisan peers.

Comparing Apples To Apples

Take for example the four most recent polls conducted in the Arkansas Senate race, as listed by Real Clear Politics. 

All four polls found U.S. Rep. Tom Cotton leading U.S. Sen. Mark Pryor by a small margin in the race for Senate. According to PPP (Democratic) [3] and Talk Business/Hendrix College (Nonpartisan), Cotton led by 2 points, while Impact Management Group (Republican) [4] and CBS News/New York Times/YouGov had Cotton ahead by 4 points. Not only were the partisan and non-partisan polling results similar in their findings, but the results were also comparable among polls conducted during similar time periods, adding another level of credibility to the surveys.

The same correlation between partisan and non-partisan polling also holds true in the latest round of polling in the Arkansas gubernatorial race. Once again, PPP (Democratic), Talk Business/Hendrix College (Nonpartisan) and Rasmussen Reports (Republican) found Republican Asa Hutchinson leading Democrat Mike Ross by 6 points, 5 points, and 7 points, respectively. CBS News/New York Times/YouGov (Nonpartisan) had the race closer, but still had Hutchinson leading by 3 points.

So while critics are apt to point out the partisan connections that some pollsters and poll sponsors have, it defies reality to conclude that the mere presence of a partisan connection instantly disqualifies a poll’s results from being legitimate. In fact, in many cases the real reason some critics decry partisan polls isn’t any actual bias in the polling; it’s that they don’t like the results and feel compelled to discredit them in the public’s eye by any means necessary.

Does that mean that polling never gets manipulated for partisan reasons? Absolutely not. But fundamentally, a reputation for providing credible, accurate numbers is a pollster’s greatest asset. “Cooking the books” might give a pollster’s client a short-term momentum or fundraising boost, but in the long run, playing that game will only jeopardize the pollster’s credibility and, ultimately, their business.

A Better Measuring Stick

So how does one distinguish a faulty or biased poll from a good one?

Rather than looking for an automatic disqualification based on a partisan connection, anyone trying to gauge a poll’s validity should instead ask themselves the following questions: 

  1. How does the poll compare to others conducted over the same, or a similar, period of time?
  2. Did the pollster, or the interest group sponsoring the poll, release the actual questions, or just a summary of a few hand-picked numbers?
  3. Are the questions written objectively or do they introduce bias (intentionally or otherwise)? [5]
  4. Were the demographic and socioeconomic breakdowns of the respondents relatively close to voting population data?
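The fourth question above can be checked with simple arithmetic. Here is a quick sketch, using entirely hypothetical numbers, of comparing a poll’s reported demographic breakdown against voting-population benchmarks and flagging any group that looks badly over- or under-represented (the 5-point threshold is an illustrative choice, not a standard):

```python
# A quick sanity check on a poll's demographic breakdown.
# All shares and the 5-point threshold are hypothetical.

poll_sample = {"women": 0.56, "men": 0.44, "age 55+": 0.50}
electorate = {"women": 0.53, "men": 0.47, "age 55+": 0.44}

# Flag groups whose sample share differs from the benchmark
# by more than 5 percentage points
flagged = {
    g: poll_sample[g] - electorate[g]
    for g in poll_sample
    if abs(poll_sample[g] - electorate[g]) > 0.05
}

for group, diff in flagged.items():
    print(f"{group}: off by {diff:+.0%} vs. voting population")
```

A poll whose sample skews far from known electorate data isn’t automatically wrong, since weighting may correct for it, but a large unexplained gap is a better reason for skepticism than a party label.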

As campaign season heads into the final stretch this fall, and the polling frenzy kicks into an even higher gear, I would argue that for anyone truly interested in getting a real snapshot of the state of play, the answers to those questions have far greater bearing than a partisan connection in determining whether a poll is worth the paper it’s printed on.

[1] Or in some cases “perceived” momentum.

[2] Even from organizations and pollsters they aren’t associated with or don’t know.

[3] Incidentally an analysis by researchers at Fordham University found PPP to be the most accurate predictor of the 2012 Presidential election.

[4] I am a Partner at Impact Management Group. Our firm conducts public opinion polling for corporate, non-profit, and yes, political clients.

[5] Of course measuring objectivity requires, well, objectivity…

(Robert Coon is a partner at Impact Management Group, a public relations, public opinion and public affairs firm in Little Rock and Baton Rouge, Louisiana. You can follow him on Twitter at RobertWCoon. His opinion column appears every other Wednesday in the weekly Government & Politics e-newsletter. You can subscribe for free here.)


