Public Polling’s Predicted Blue Wave Meets Red Reality in Texas
By Brad Johnson
The Texan, Published November 6, 2020
The election four years ago became almost synonymous with erroneous polling. While the national popular vote polling was within spitting distance of the outcome (which matters not a lick in this federalist electoral system), the polling in a handful of key swing states was errant enough to get the Electoral College outcome substantially wrong.
This go-around, public polling — meaning publicly available, mostly media-based polling aiming to take a snapshot of voters’ opinions at a moment in time — had just as bad an evening, if not worse.
And Texas was a focal point.
Unofficial results show President Donald Trump carried Texas by 5.8 percent, substantially lower than his nine-point margin in 2016 and dwarfed by Mitt Romney’s 15-point victory in the state in 2012.
But the RealClearPolitics (RCP) polling average, an aggregated average of all public polling, projected a Trump +1.3 environment. RCP’s average is not a proprietary poll, but a calculation of other polling outfits’ results.
These outfits include national firms such as Quinnipiac, Emerson, Rasmussen, UMass Lowell, and the NY Times/Siena.
Averaged across the race, Quinnipiac’s polling had Trump at +0.2 percent, a virtual dead heat. Similarly, Emerson’s polls averaged out to Trump +0.75 percent and UMass Lowell’s to Trump +2.5 percent. Rasmussen’s lone poll from early October, closer than most other national outfits, projected Trump +7 percent, errant in the other direction.
Local outfits had more mixed results.
The Dallas Morning News was wildly off, with an average projection of Biden +0.5 percent; its final poll before Election Day had Biden up three percent in the state.
The University of Texas and the Texas Tribune’s polls averaged out to Trump +5.25 percent, strikingly close to the final result.
A University of Houston poll conducted from October 13 to 20 had Trump +5.3 percent.
In the other big statewide race, the U.S. Senate, Sen. John Cornyn (R-TX) walloped Democratic challenger M.J. Hegar by 9.8 percent. The Senate polling was less mistaken than in the presidential race, but the RCP average was still 3.3 points off.
The NY Times/Siena was nearly spot-on with its late-October poll, putting the race at Cornyn +10 percent. Emerson, meanwhile, had Cornyn +5 in its home-stretch poll.
Quinnipiac, despite its wild error in the presidential race, was not so far off in the Senate race, with an average of Cornyn +7.6 percent. Its final poll, conducted from October 16 through 19, projected Cornyn +6. UMass Lowell’s polling was about the same.
Rasmussen’s one poll showed Cornyn +9.
Locally, the Dallas Morning News erred in the opposite direction, but less severely than in its presidential polling, averaging Cornyn +11.25 percent; its October poll projected Cornyn +8 percent. The University of Houston’s October survey showed Cornyn +7 percent and the University of Texas/Texas Tribune’s showed Cornyn +8 percent.
Broken down between national and local, the national outfits’ Texas polls were off substantially: their averaged polling collection projected Trump +1.8 percent.
Local outfits didn’t fare much better at a Trump +2.5 percent average. However, when the Dallas Morning News polls are removed as potential outliers, the average jumps to a much more accurate Trump +5.25 percent.
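The arithmetic behind that adjustment is simple to sketch. Below, hypothetical margins (positive for Trump, negative for Biden) stand in for the actual poll list, showing how a single outlier can drag a small average down:

```python
# A sketch of averaging poll margins and dropping a suspected outlier.
# The outfits and margins below are hypothetical stand-ins, not the polls above.
from statistics import mean

local_polls = {"Outfit A": 5.3, "Outfit B": 5.2, "Outfit C": -0.5}

# With the outlier included, the average understates the eventual margin.
print(round(mean(local_polls.values()), 2))  # 3.33

# Dropping the suspected outlier moves the average by nearly two points.
trimmed = [margin for name, margin in local_polls.items() if name != "Outfit C"]
print(round(mean(trimmed), 2))  # 5.25
```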
Derek Ryan, Texas political consultant and founder of Ryan Data & Research, highlighted the local-national divide, telling The Texan, “The local pollsters were much more accurate. They have a far better understanding of the state than do pollsters thousands of miles away.”
With the exception of the Dallas Morning News, Ryan is spot on.
“I’d trust any Texas polling firm more than a national one,” he underscored.
Chris Wilson, CEO of conservative data and polling firm WPA Intelligence, told The Texan, “This was a tale of two stories: the media polling was historically bad, but the private polling I was privy to was really quite good.”
One of WPA’s polls in Nevada, done for the Las Vegas Review-Journal, showed the state very close, contrary to much of the publicly available polling, which projected an easy Biden win. As of this writing, the state is still too close to call, with Biden holding a narrow lead.
Wilson described an evolution he sees within the GOP polling industry: building more accurate turnout models by moving away from a reliance on voting history and toward algorithm-based voter scores.
The scale he and WPA use runs from zero to one; any voter rated above 0.5 is classified as a likely voter, while those below are not.
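As an illustration of the mechanics only, and not of WPA’s actual model, a minimal sketch of that score-and-threshold screen might look like the following, assuming a turnout score has already been produced upstream:

```python
# A minimal sketch of a score-based likely-voter screen. The Voter type and
# scores here are hypothetical; a real turnout model (e.g. a logistic
# regression over demographics and engagement data) would produce the score.
from dataclasses import dataclass

@dataclass
class Voter:
    voter_id: str
    score: float  # modeled turnout probability, 0.0 to 1.0

LIKELY_THRESHOLD = 0.5  # per the article: above 0.5 counts as likely to vote

def likely_voters(voters: list[Voter]) -> list[Voter]:
    """Keep only voters whose modeled turnout probability clears the cutoff."""
    return [v for v in voters if v.score > LIKELY_THRESHOLD]

sample = [Voter("A", 0.82), Voter("B", 0.44), Voter("C", 0.51)]
print([v.voter_id for v in likely_voters(sample)])  # ['A', 'C']
```

In practice the hard 0.5 cutoff is a design choice; a pollster could instead weight every respondent by the raw score rather than making a binary cut.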
Building an accurate turnout model is hard work. Ryan stated, “It’s getting harder and harder to get prospective voters to answer polling calls. Oftentimes it can take 30 to 40 calls to get one answer from the identified voter.” This all compounds the time and resources necessary to complete the job.
Wilson, as a pollster himself, employs a varied approach which does include traditional phone calls from 6:00 p.m. to 9:00 p.m. However, when it becomes difficult to reach an identified voter, WPA and the other polling outfits he works with often text or email the voter in advance to set up a phone interview. That way the voter can pick the time that works best for them.
As became widely known after the 2016 election, an errant turnout model can swing results vastly. In short, many voters with little-to-no voting history showed up to vote for then-candidate Donald Trump and were not accounted for in many of the turnout models.
This time around, a late-October Washington Post-ABC News poll showed Democratic candidate Joe Biden up 17 points in Wisconsin. When the dust settled on Wednesday, Biden had won the state by only 0.5 percent. “It’s difficult to argue that a poll such as that did not discourage in some respect Wisconsin Republicans from turning out — when your guy is behind that much, it’s dispiriting,” Wilson added.
Media polling, Wilson stipulated, is different from private industry polling because the latter faces much more competition. If a pollster gets a client’s race wrong, that reputation builds until, eventually, they’re out of business. “The attrition from just a short time ago of polling outfits is immense, and it’s because if you don’t get it right, nobody will hire you.”
Media outlets’ financial security does not hinge on the accuracy of their polling.
He continued, “I see two crimes among the media polling, one of omission and one by commission.” The crime of omission consists of errant assumptions about who will turn out to vote. By analyzing only voting history, pollsters cannot account for drastic changes in the voting population within their models. In 2016, that meant the aforementioned Trump voters with next to no voting history.
This year, the models erred by not accounting for a turnout uptick among Republicans proportional to the one projected for Democrats. In Texas, millions of new Democratic voters turned out to vote, but so did millions of new GOP voters. And so, the “Blue Wave” projections turned out to be no more than a ripple.
Ryan’s early voting reports illustrated this trend. His final report showed that 28 percent of early voters had Republican voting histories whereas only 22 percent had Democratic voting histories. No turnout advantage materialized at all for Democrats, which became apparent Tuesday as Democrats made virtually no electoral gains in Texas.
“The crimes of commission are where a firm starts out to build a narrative. Most media firms don’t do that, but some do such as Public Policy Polling,” Wilson continued. In that effort, firms will ask a loaded question on the front end that inevitably shifts the opinion of the respondent — also known as a push poll.
When analyzing the veracity of polls, Wilson and Ryan offer some advice.
Ryan points to the question phrasing, knowing that if a push poll-type question comes up, the poll is likely bunk.
He also looks to the “cross tabs,” or breakdown of responses by categories like age, race, and partisan history. The weight placed on each of those categories, measured against how the electorate is actually expected to break down, is another factor Ryan identifies.
“If your poll has a higher rate of urban turnout versus suburban and rural and the environment doesn’t reflect that, then it’s going to be off,” he added.
If obvious errors arise in the demographic makeup, then that calls the poll into question, which goes hand-in-hand with Wilson’s point on voter sample construction.
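As a rough sketch of the kind of cross-tab check Ryan describes, assume a poll publishes the geographic makeup of its sample; the expected-electorate shares below are hypothetical, not drawn from any poll cited here:

```python
# A sketch of a cross-tab sanity check: compare each demographic category's
# share of a poll's sample against its expected share of the electorate.
# All shares below are hypothetical.
EXPECTED_ELECTORATE = {"urban": 0.38, "suburban": 0.42, "rural": 0.20}

def flag_skews(sample_shares: dict[str, float], tolerance: float = 0.05) -> list[str]:
    """Return the categories whose sample share strays beyond the tolerance."""
    return [
        category
        for category, expected in EXPECTED_ELECTORATE.items()
        if abs(sample_shares.get(category, 0.0) - expected) > tolerance
    ]

# A hypothetical sample that over-represents urban respondents:
poll_sample = {"urban": 0.48, "suburban": 0.38, "rural": 0.14}
print(flag_skews(poll_sample))  # ['urban', 'rural']
```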
At WPA, Wilson has an employee whose entire job is to find reasons within a poll that show it to be wrong. Surface-level exploration can be done by looking at the margin of error and the sample size, but that only gets one so far.
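For reference, that surface-level check rests on the textbook margin-of-error formula for a simple random sample; this is standard survey arithmetic, not anything specific to WPA:

```python
# The textbook 95-percent margin of error for a simple random sample:
# MOE = z * sqrt(p * (1 - p) / n), using p = 0.5 as the worst case.
import math

def margin_of_error(sample_size: int, p: float = 0.5, z: float = 1.96) -> float:
    """Return the margin of error in percentage points."""
    return z * math.sqrt(p * (1 - p) / sample_size) * 100

print(round(margin_of_error(1000), 1))  # 3.1 points for a 1,000-person sample
```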
Wilson keys in on the more in-depth methodology of how the sample is built, which isn’t always publicly available. But in the cases where it is, a poll that used voter probability (the aforementioned scale methodology) rather than voter history is, on balance, a good place to start.
An example of this played out in Texas, where Hispanic voter projections were badly off, specifically within the southern border region. Numerous counties that voted heavily for Hillary Clinton in 2016 went only narrowly for Biden this time, and some even flipped to Trump entirely. That exposed errors in the modeling. It also reflected the increased difficulty of getting likely voters on the phone.
When analyzing a poll, Ryan further underscored, “I always ask, who’s releasing the poll? Do they have a reason to release it?” Many of the polls that appear in fundraising and outreach emails from campaigns are designed to motivate supporters into opening up their wallets or getting out the vote.
After the 2016 results, much of the public polling industry engaged in introspection, hoping to find the errors within its practices. And after this year’s election, and the polling which led up to it, that introspection is still found wanting.