Wednesday, November 07, 2012

Creative non-analysis: Unskewed was not wrong!

Ok.. Rant warning.

I've saved the state-by-state predictions made by the widely cited site.  It was not just off in its overall prediction; it was off in literally every state deemed a 'battleground', and in a fair number of states where the results were never in question.  How wrong it was is no longer news.  In fact, the alternative-universe version of Nate Silver behind the Unskewed site went on record today with a very wonkish mea culpa:  BusinessInsider: 'Unskewed' Pollster: 'Nate Silver Was Right, And I Was Wrong'

So points for having the guts to come out with that kind of statement.  My feeling is that it was unnecessary.  Seriously. 

Unskewed was just part of a cottage industry of punditry that rained down on Nate Silver and his 538-based predictions.  It sure seemed that the only real complaint against his results was that some didn't like what the numbers were saying.  There clearly are a lot of issues with polling, surveys, and the business of prediction. Even the 'hardest' of data has a lot more issues than the public cares to ponder. There likely were, and likely still are, all sorts of issues with error in all polls.  There are legitimate debates still to be had over Silver's methodology even after the accuracy it showed with Tuesday's results.   Maybe he was just lucky?
But here is the thing.  Even if the legitimate critiques of Silver's methodology were stronger, that would not give credence to the idea that you can just make up alternative numbers; certainly not numbers presented as equivalent to what Silver was coming up with. The Unskewed methodology was not flawed; it was fiction.  Regular commenter here MH actually pointed out the single biggest error in the Unskewed methodology, though calling it 'error' implies a real methodology even existed.  In essence, Unskewed was assuming most of the result it was purporting to predict.  If you tried that in Excel, it would flag your formula as a circular reference.  There is no such error-check in punditry, which is all it was in the end: punditry confused with a bunch of numbers that may have appeared to mean something, but didn't.
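To make that circularity concrete, here is a minimal sketch of what the 'unskewing' arithmetic amounted to. The numbers are invented for illustration, not Unskewed's actual figures: a poll reports candidate support within each party-ID group along with the partisan mix it actually found, and 'unskewing' simply swaps in an assumed partisan mix for the one the poll measured.

```python
# Hypothetical illustration of the "unskewing" arithmetic.
# A poll reports candidate support within each party-ID group,
# plus the partisan mix it actually *found* in its sample.

poll = {
    # party_id: (share_of_sample, dem_candidate_support)
    "Dem": (0.38, 0.92),
    "Rep": (0.32, 0.06),
    "Ind": (0.30, 0.45),
}

def topline(party_mix):
    """Weighted Dem-candidate support for a given partisan mix."""
    return sum(party_mix[p] * poll[p][1] for p in poll)

# What the poll reported, using the partisan mix it found:
reported = topline({p: poll[p][0] for p in poll})

# "Unskewing": substitute an *assumed* partisan mix (say, a
# 2004-like, Rep-leaning electorate) for the one the poll found.
assumed_mix = {"Dem": 0.35, "Rep": 0.37, "Ind": 0.28}
unskewed = topline(assumed_mix)

print(f"reported topline:   {reported:.3f}")
print(f"'unskewed' topline: {unskewed:.3f}")
```

The output moves purely because of the assumed mix: the poll's own finding about party ID, the one thing actually under dispute, has been overwritten rather than tested. That is the circular reference in plain form.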
We need to give a name to this phenomenon though.  I am pretty sure that without Silver's rising profile via 538, there would never have been an Unskewed site to begin with.  The mere mention of Unskewed here is one small reflection of the site's notoriety; notoriety generated as a reflection off of what 538 was doing. 
This is a huge problem, and not just in political polling. There is good analysis and bad analysis out there.  There are almost always innumerable data sources for any issue.  I feel for my journalist friends, actually, who have to report on data-intensive topics.   They can't be experts in all topics, but without a certain degree of expertise you almost can't navigate between the good, the bad and the ugly. When all else fails you rely in large part on credentials and past practice.  So again, Silver has a long history in applied statistics, and an undergraduate degree in economics, mind you.  Can anyone tell me what in Unskewed's background makes its author qualified to be doing quantitative work of any kind?  Ad hominem, I know, but lacking any other rationale for ever using the site I am not sure what else to look at.  The problem in the end was not Unskewed's results or methodology, but that anyone bothered to reference it in the first place. 

Ok. I'm moving on now.  I swear.  Off to only looking at hard numbers, like say employment counts. No fuzziness there.  


Anonymous BrianTH said...

The argument for "unskewing" the polls started with the premise that the relevant pollsters were assuming as part of their methodology, rather than reporting as part of their findings, the partisan sentiments of the electorate. That should have been an easy premise to rebut--the relevant pollsters certainly tried--but since it fit with the more general premise that the "mainstream media" is constantly trying to deceive the public in every way possible as part of an ongoing campaign against Republican Party interests, and because the market for "unskewed" polls didn't want to believe what the pollsters were reporting anyway, that premise remained durable.

I note all this because I don't think that should have been too hard a debate for even math-challenged media professionals to evaluate: the pollsters themselves should have been considered an authoritative source on what they were assuming versus what they were reporting. So I see this more as part of the general problem with the political media uncritically defaulting to a he-said/she-said position, as opposed to a problem where evaluating the competing analysis really would require some amount of sophistication.

Thursday, November 08, 2012 9:45:00 AM  
Anonymous MH said...

Were they really confused about that? I think the idea was more that polls were stratified by demographics and needed to be "unskewed" because that stratification was the cause of the partisan make-up of the sample. In other words, the main confusion was that party ID was taken, by the unskewers, as similar to a demographic characteristic, not a political opinion. I assume somebody read somewhere that party ID is fixed early in life and missed all the caveats/nuance in that research.

Thursday, November 08, 2012 10:10:00 AM  
Blogger tonycpsu said...

BrianTH is right on. Today's conservative movement will find anyone to tell them what they want to hear. Dean Chambers saw an opening for a "Nate Silver of the Right" (as if Nate Silver is of the left) so he jumped on the opportunity, just like Dick Morris has made a nice career out of telling the conservatives what they want to hear (he was predicting a Romney landslide as well.)

Frank Newport (Gallup's editor-in-chief) was on Marketplace yesterday, and Kai Ryssdal, to his credit, asked some tough questions about how poorly Gallup did this year. (I think they were tied for 19th with Rasmussen or something like that in terms of accuracy.) Newport responded by basically complaining about how the aggregators are using their data, and that if everyone starts paying more attention to the aggregators, the pollsters won't have a market for their services.

This sounded strange to me -- surely NYT has to pay the pollsters for the detailed internals of the polls? If your business model was "giving data away for free and hoping people don't aggregate it into more reliable data", then I don't think you really had a business model.

Thursday, November 08, 2012 10:14:00 AM  
Anonymous BrianTH said...


I'm not sure we are saying different things.

It very much is true that pollsters reweight their samples in order to hit certain demographic targets, because that is the only way they can get something roughly equivalent to a truly random sample. And in fact there are legitimate disagreements among pollsters about how exactly to do that, and that is part of what explains how some pollsters did better than others in this election (Gallup, in particular, really blew it on this issue).

But as I believe you are also noting, generally the pollsters considered ripe for "unskewing" were not targeting any particular partisan breakdown during this process, but instead were simply reporting the partisan sentiments in their sample once it had gone through this process.

However, that doesn't make it untrue that their reported partisan findings were influenced by their demographic reweighting, because ALL their reported findings were influenced by their demographic reweightings. But again, it was crucial to understand that the relevant pollsters were not consciously choosing to hit a certain partisan breakdown, but instead were reporting what the partisan sentiments of the sample looked like once the demographic reweighting had been done.
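The distinction can be shown in a few lines. This is a deliberately tiny sketch with invented respondents and invented population targets, not any real pollster's weighting scheme: the pollster reweights to known demographic targets (age, here), and the partisan breakdown is then read off the reweighted sample as a finding, never set as a target.

```python
# Sketch (invented numbers) of demographic post-stratification:
# the pollster reweights to known demographic targets, and party
# ID is then *read off* the reweighted sample, not dialed in.
from collections import Counter

respondents = [
    # (age_group, party_id)
    ("18-34", "Dem"), ("18-34", "Dem"), ("18-34", "Ind"),
    ("35-64", "Rep"), ("35-64", "Dem"), ("35-64", "Rep"), ("35-64", "Ind"),
    ("65+",   "Rep"), ("65+",   "Rep"), ("65+",   "Dem"),
]

# Known population shares (e.g. from the Census) -- the only
# quantities the pollster deliberately targets.
age_targets = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

sample_age = Counter(age for age, _ in respondents)
n = len(respondents)

# Each respondent's weight: target share / observed share.
weights = [age_targets[age] / (sample_age[age] / n) for age, _ in respondents]

# Party ID is a *finding*: the weighted breakdown of the sample.
party_weight = Counter()
for (_, party), w in zip(respondents, weights):
    party_weight[party] += w
total = sum(party_weight.values())
party_share = {p: w / total for p, w in party_weight.items()}
print(party_share)
```

Nothing in the code mentions a partisan target; the demographic weighting influences the reported party ID (as noted above, it influences every reported number), but change who answers the phone and the partisan finding changes with it.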

In any event, my point was just that this is simply an issue about what methodology the relevant pollsters were actually using, and that is something anyone in the media should have been capable of determining through conventional reporting: you just ask the pollster.

Thursday, November 08, 2012 10:45:00 AM  
Anonymous BrianTH said...


I believe the worry is something like this. The reason newspapers and other media outlets are willing to pay for polls is that those poll findings are considered by the public to be important news, and therefore can drive traffic to the particular media outlet when the findings are reported. However, if everyone starts treating individual poll findings as not particularly newsworthy in isolation, and instead waits to see how they affect the aggregation of polls, then media outlets will lack the incentive necessary to pay for polling.

Personally, this strikes me as a purely hypothetical concern that largely ignores human nature. I think even people who view the aggregators as more reliable than individual pollsters still look at individual polls when they are first published. In fact, if anything, I think the people tracking the aggregators may well look at more individual polls than most politically-interested people, precisely because they have some sense of how a new individual poll may influence the aggregations they are tracking.

In other words, you may understand that a batting average says more about a hitter than the result of a given at-bat, but the kind of people who care a lot about batting averages probably are also more likely to watch the individual at-bats of hitters they care about, particularly when something is on the line (say, who is going to win a batting title).

So I highly doubt the growing popularity of aggregators will ruin the incentives of media outlets to pay for and publish new polls, and if anything I would actually guess the opposite is true.

Thursday, November 08, 2012 10:58:00 AM  
Anonymous BrianTH said...

By the way, this WOULD have taken some sophistication, but the appropriate story to tell based on all this would have been something like this:

A lot of pollsters were finding that once they applied both their demographic reweightings and their likely-voter screens, by the end the likely voter population was looking demographically favorable for the Democrats (and Obama)--perhaps surprisingly so in light of what many people had been assuming turnout would look like as a result of depressed Democratic enthusiasm.

Properly understood, that was a finding of the polling, not an assumption of that polling. And it is a finding that turned out to be correct, and that (perhaps surprising) real world dynamic arguably ended up determining the outcome of the election.

So there was an opportunity here for the media to look at all this discussion of poll weightings and report something very interesting in advance of the election, but unfortunately that opportunity got drowned out by the media's he-said/she-said reporting on the partisan spinning, reporting on the personalities involved, and so on.

Thursday, November 08, 2012 11:13:00 AM  
Anonymous Anonymous said...

Bingo, Brian. The "unskewed" people believed that the pollsters were misreading their data, and that those who showed up at the polls would look like those who showed up in 2004, not 2008. Despite the data saying otherwise.

The more interesting story is that the Romney campaign also believed that the electorate would look like 2004, which was why polls allied with it (Rasmussen, Gallup) saw the race as a "toss-up" or leaning Romney.

So, it really was much more than "unskewed". The Romney people literally bet the race that they were right--basing their advertising and polling on a hopelessly out of touch model. What a way to waste three-quarters of a BILLION dollars, eh?

Thursday, November 08, 2012 8:08:00 PM  
Blogger Bram Reichbaum said...

I have a different take. I think it was an unintended consequence that many on the right *believed* unskewed polls and "unskewed thinking". I think they knew they were peddling optimism.

But one theory about undecideds says, many of them want to vote for the winner. It makes them feel good about themselves, to be on the side that picked the winner. And besides, everyone likes a Comeback Kid story. So they fed the optimism as a strategy.

On election night, Karl Rove simply didn't mind delaying Obama's acceptance speech and making the memory of the election seem closer than it was, for mandate purposes. Megyn Kelly, for example, saw right through it. Can't help it if Bill O'Reilly and some others in that environment got swept up; collateral damage of a strategy that, in a closer or different election (sans Sandy?), might have worked.

Friday, November 09, 2012 9:19:00 PM  
Anonymous BrianTH said...

There are a number of competing, but not necessarily mutually exclusive, explanations for why "optimistic" polling analysis took hold on the right. On the media side of the right-wing infotainment industry, they definitely have economic incentives to feed their market whatever it wants to hear, and they may also have believed it served to maximize the odds of their preferred outcome. On the consumer side, some of it might have been epistemic closure, and some of it might have been intuitively or even consciously strategic.

However, I also think it is possible some of it was tribe-membership signalling. In fact if you followed the online discussions of polling in popular right-wing forums, you would have observed that anyone who tried to raise cautionary notes about "unskewing" would rapidly be attacked personally, including with accusations of false-flag posting.

Sunday, November 11, 2012 10:48:00 AM  
Blogger C. Briem said...

I guess my point was more why those who were arguably trying to be objective bought into the unskewed content. (I refuse to call it analysis.) I can't find it quickly, but there was some story about how NBC News felt pressure at one point to apply the unskewed 'recalibration' to its own reported numbers... all without any basis for doing so.

To put it bluntly.. Unskewed is to the field known as statistics as alchemy is to chemistry.

Sunday, November 11, 2012 8:31:00 PM  
