Post-match: The 2020 "What Polling Got Wrong" Thread...

Discussion in 'Elections' started by Dr. Wankler, Nov 2, 2020.

  1. xtomx

    xtomx Member+

    Chicago Fire
    Sep 6, 2001
    Northern Wisconsin, but not far from civilization
    Club:
    Chicago Fire
    Not "nonsensical": nonsense. I am sick of all the nonsense that surrounds polling.

    I really do wish to discuss, but suffice it to say that I am glad I won't have to hear the name "Nate Silver" for a couple of years.
     
  2. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    Maybe don't post in the polling thread then?
     
  3. xtomx

    xtomx Member+

    Chicago Fire
    Sep 6, 2001
    Northern Wisconsin, but not far from civilization
    Club:
    Chicago Fire
    #128 xtomx, Nov 16, 2020
    Last edited: Nov 16, 2020
    Do you mean I should not post concerns about the nonsense surrounding polls in the

    "2020 What Polling Got WRONG" Thread?

    One would think that would be the point of such a thread.

    It is why I corrected the comment accusing me of calling polling nonsensical, which I did not.

    Despite stating I wished to discuss the issues I have with polling in the same post, you are saying I should not, so I will not.

    Thanks.
     
  4. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    Well if you don't want to hear the name Nate Silver for several years, this is probably not the thread for you? :p
     
  5. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    Yes - and actually polls did well in '18, so the idea that they are "failing" is a bad hot take

    I think it is also interesting to compare Trump '16 vs Trump '20 and forget about '18

    In that frame, Trump lost ground in every dimension, while increasing his turnout
     
  6. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    Sorry to have accused you of anything here. I don't see the distinction between nonsensical and nonsense. If you care to expound on what you deem to be nonsense, I'm all ears.
     
  7. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    If someone had a reasonable level of knowledge of US election dynamics, lapsed into a coma 4 years ago and woke up the day before Election Day having not been exposed to all of the conjecture over the last 4 years, I think the polls and models like 538 would be viewed much more credibly. Upon waking up, you'd probably conclude to focus only on the states that were within 10 pts of flipping in 2016. Those 17 states by group:

    1) Three states you could immediately disregard (CO, VA, NM): the model would tell you all 3 are expected to go Dem beyond the model error. 3/3
    2) The 4 states that went for HRC that are still within the model error (NH, ME, MN and NV). Biden was favored to win all 4 at p > 85%. He won all 4. The model also correctly identified the most uncertain state (NV). 4/4
    3) The three states (WI, MI and PA) where Biden was a strong favorite to flip (p = 85% to 95%). He flipped all three. 3/3
    4) The three states that Trump was modestly forecast to win (TX, IA, and OH). He won all three. 3/3
    5) The four states where Biden was narrowly favored (GA, NC, FL, and AZ). The projection held 2 out of 4 times. 2/4

    The model went 15/17 in states that were at all competitive in 2016, and it went 5/7 in the most uncertain states. In all 17 states, the actual margin was within the model margin. Even better, by expanding the uncertainty beyond the stated poll MOE, 538 was able to highlight the uncertainty via its win probabilities. Had they ignored the greater uncertainty, three of their estimates would have fallen outside the model error (IA, OH, and WI).

    There are reasonable criticisms of polling and methods out there, but as you mentioned, the idea that they failed is a bad take.
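
    A quick tally of the scorecard above (the counts are as stated in the post; the groupings paraphrase 538's probabilities):

```python
# Tally of 538's 2020 calls for the 17 states within 10 pts in 2016,
# grouped as in the post: (correct calls, states in group).
groups = {
    "safe Dem (CO, VA, NM)": (3, 3),
    "HRC holds within model error (NH, ME, MN, NV)": (4, 4),
    "strong-favorite flips (WI, MI, PA)": (3, 3),
    "modest Trump favorites (TX, IA, OH)": (3, 3),
    "narrow Biden favorites (GA, NC, FL, AZ)": (2, 4),
}
correct = sum(c for c, _ in groups.values())
total = sum(t for _, t in groups.values())
print(f"{correct}/{total} states called correctly")  # prints "15/17 states called correctly"
```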
     
  8. JohnR

    JohnR Member+

    Jun 23, 2000
    Chicago, IL
    Hmmm.

    Well, win/loss outcomes are one way of scoring the result, but not the only way. Another way is to note that for two consecutive elections, the polling models were about 5 points wrong, each time in the same direction, for four closely related (and very important) states: Wisconsin, Michigan, PA, and Ohio.

    That needs fixing. The other 46 states, eh, some inaccuracies here and there, and the national averages were somewhat too high for the Dem candidate in both elections, but those results strike me as being defensible. The Northeast Midwest, or whatever we choose to call those four states, let's just say that if those polls were my doing, I would be embarrassed. And if my boss fired me, I would understand why.
     
    Dr. Wankler and soccernutter repped this.
  9. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    Agree

    the forecast was not bad overall.
     
  10. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    Thing is, that has been fixed. This time the issues are different. But also we never had an election in a pandemic before.
     
  11. American Brummie

    Jun 19, 2009
    There Be Dragons Here
    Club:
    Birmingham City FC
    Nat'l Team:
    United States
    I think this is a fair critique. Pollsters who got 2016 and 2020 wrong in the big states (Florida, you too!) need to be held to account. Their methods should become more transparent.

    People like Nate Cohn who were as transparent as possible and still missed Wisconsin by double-digits...I dunno what else you do, aside from let random people on Twitch pick the phone numbers and emails to send the polls.
     
  12. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    I'm not disagreeing that the errors are skewed here, although I will point out that all errors fell within the confidence intervals, at least in terms of how 538 quantified them. For individual polls with narrower MOEs (3 pts per candidate, or 6 pts net), they need to do a better job of disclaiming/disclosing certain types of error. Random statistical noise will support a 6 pt margin swing, but it doesn't touch on participation bias.
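
    For reference, here is where the "3 pts per candidate, 6 pts net" figure comes from, sketched for a simple random sample (this covers sampling noise only, which is exactly the limitation being discussed):

```python
from math import sqrt

def moe(p, n, z=1.96):
    """95% margin of error for one candidate's share in a simple random
    sample of size n. Sampling noise only -- it says nothing about
    participation bias or untruthful responses."""
    return z * sqrt(p * (1 - p) / n)

# A ~1,000-person poll with a candidate at 50%:
per_candidate = moe(0.5, 1000)   # about 0.031, i.e. +/- 3.1 pts
# The head-to-head margin can swing by roughly twice that (~6 pts net).
```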

    In terms of stats and error generally, I think we need to distinguish between polling and continuously data-rich statistical environments where participation by the things being measured isn't discretionary. In the actuarial sciences, if your model is directionally wrong (even if within the MOE) on a consistent basis when it comes to risk-assessing mortality/health care expenses by insured cohort, that's inexcusable. Same thing with not incorporating climate change into shipping loss forecasts. If you're a market analyst and you aren't continuously incorporating the latest bond curves, equity volatility by industry, etc., that's really bad.

    This is not those things. You get one shot in a high-turnout general election every 4 years. You don't have a way to require participation like applying for insurance/health screening, climate data, continuous measurement of financial data, etc. You are relying upon participants to be truthful. You are also teasing out the preferences of people who don't want their preferences revealed. And you don't get continuous real-time results to recalibrate your assumptions, and things change dramatically between real-world tests.

    I would compare this to NCAA tournament seeding. The outcome of 8s vs 9s is a 50-50 proposition. Even a 5-12 is only 65-35. And the selection committee gets a very rich and complete data set where they know the scores of previous contests with venues, injuries, etc. They can chain results together to quantify team strength. Election forecasting is kind of like seeding the NCAA tournament field. One difference: in elections, the "selection committee" is making an educated guess on the scores of regular season games. Because some guy on Duke doesn't want his numbers in the box score.

    Pollsters need to find different techniques to elicit response or infer preferences, but people generally need to understand the limitations.
     
    hexagone repped this.
  13. soccernutter

    soccernutter Moderator
    Staff Member

    Tottenham Hotspur
    Aug 22, 2001
    Near the mountains.
    Club:
    Tottenham Hotspur FC
    Nat'l Team:
    United States
    Nate Silver says "boo." :p
     
  14. soccernutter

    soccernutter Moderator
    Staff Member

    Tottenham Hotspur
    Aug 22, 2001
    Near the mountains.
    Club:
    Tottenham Hotspur FC
    Nat'l Team:
    United States
    This gets at a couple of different problems which are being discussed at large. The first is the prediction of who will win a state. The second is by what margin. And polls, which always have an MOE, do both.

    So, on the first metric, the polls were quite accurate in their predictions. Biden won the states he was predicted to, apart from NC and FL, and Individual One won the states he was predicted to. Overall, that is 49 out of 51, which is quite accurate.

    But what is really getting everybody into a twist is the predicted margin versus the actual margin of victory. Outside of the wonks (and people seriously discussing this), it seems most people look at the margin as a solid number and do not realize that there is an MOE built in.

    Still, one area that has surprised a lot of people is the number of ex-President voters. Personally, I think there is some enthusiasm v fanaticism metric out there which could be used. I'm gonna try and see if I can find anything.

    edit - and what JohnR said.
     
  15. sitruc

    sitruc Member+

    Jul 25, 2006
    Virginia
    Can forecasts be wrong?
     
  16. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    All forecasts creating a continuous (as opposed to categorical/binary) output are wrong.
     
  17. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    The issues of pick the winner vs “cover the spread” are settled. The polls (when run through a wringer) still do a statistically good job of predicting the winner within the MOE but there is a consistent bias in not “covering the spread”. There’s nothing that needs to be said on any of that. The interesting offshoots:

    -communicating/disclosing uncertainty in an accessible way
    -creating methods/approaches/polling questions that identify and correct bias related to human behavior/participation/trust bias
    -understanding the limits of data in these scenarios vs something with more easily measurable data.
    -asking how much value there is to be gained by current methods vs simpler methods.

    You can make a KISS model that 1) takes the prior election state margins and 2) adjusts each of those margins by the difference of the actual prior election popular vote and the average forecast popular vote before this election. That model would generally underperform something like 538, but not by much. It would miss the same 2 states this time and 5 states vs 3 last time and the errors would be slightly larger but maybe a touch better distributed. A political scientist or statistician could look at a sheet and say 538 is better than KISS. But from a practical standpoint for the public, is it “better enough” to really matter?
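
    One reading of that KISS model, sketched in Python as a uniform national swing (this interpretation, and the illustrative numbers below, are my gloss, not the poster's exact spec):

```python
def kiss_forecast(prior_margins, prior_actual_pv, current_poll_avg_pv):
    """Uniform-swing sketch of the KISS model described above: shift each
    state's prior-election margin by the gap between the prior national
    popular vote and the current national polling average.
    All figures in points, positive = Dem."""
    swing = current_poll_avg_pv - prior_actual_pv
    return {state: margin + swing for state, margin in prior_margins.items()}

# Illustrative, approximate 2016 Dem margins and national figures:
margins_2016 = {"WI": -0.8, "PA": -0.7, "MI": -0.2, "AZ": -3.5}
forecast_2020 = kiss_forecast(margins_2016, prior_actual_pv=2.1,
                              current_poll_avg_pv=8.4)
# Every state shifts by the same ~6.3 pts, e.g. WI: -0.8 -> about +5.5
```

    The point stands either way: a one-line model like this tracks a full forecast surprisingly closely, which is what makes the "is 538 better enough?" question interesting.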
     
    soccernutter repped this.
  18. American Brummie

    Jun 19, 2009
    There Be Dragons Here
    Club:
    Birmingham City FC
    Nat'l Team:
    United States
    All forecasts are wrong.

    Some are useful.
     
  19. JohnR

    JohnR Member+

    Jun 23, 2000
    Chicago, IL
    I encounter this in my work. Technically, no matter what the outcome, somebody assigning probabilistic forecasts can always say, "Well that was one of my probabilities." But if those outcomes over time don't end up anywhere near the assigned 50th percentile, then yeah those forecasts were wrong.
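
    That "were the stated probabilities any good over time" question is a calibration check, which can be sketched like this (toy data, not real polls):

```python
from collections import defaultdict

def calibration(forecasts):
    """Bucket (predicted probability, outcome) pairs into deciles and
    compare the average stated probability with the observed win rate.
    Well-calibrated 70% calls should come true about 70% of the time."""
    buckets = defaultdict(list)
    for prob, won in forecasts:
        buckets[min(int(prob * 10), 9)].append((prob, won))
    return {
        decile: (sum(p for p, _ in rows) / len(rows),   # avg stated prob
                 sum(w for _, w in rows) / len(rows))   # observed win rate
        for decile, rows in sorted(buckets.items())
    }

# Hypothetical: four ~80% calls of which only two came true -> overconfident.
table = calibration([(0.8, 1), (0.8, 0), (0.85, 1), (0.82, 0)])
```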
     
    The Jitty Slitter repped this.
  20. Dr. Wankler

    Dr. Wankler Member+

    May 2, 2001
    The Electric City
    Club:
    Chicago Fire
    Maybe I should've called this thread "What People Get Wrong About Polling."
     
  21. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    Alternate (Long) Title:

    “Academics and Media Can’t Figure Out How to Poll Party of Authoritarian Demagogue Who Despises Them”
     
  22. rslfanboy

    rslfanboy Member+

    Jul 24, 2007
    Section 26
    Come on mods! Do your worst.
     
    crazypete13 and Dr. Wankler repped this.
  23. crazypete13

    crazypete13 Moderator
    Staff Member

    May 7, 2007
    A walk from BMO
    Club:
    Toronto FC
    I can do a lot worse than that.
     
    ChrisSSBB and Chicago76 repped this.
  24. rslfanboy

    rslfanboy Member+

    Jul 24, 2007
    Section 26
    Get on with it!
     
  25. EvanJ

    EvanJ Member+

    Manchester United
    United States
    Mar 30, 2004
    Club:
    Manchester United FC
    Nat'l Team:
    United States
    The Jerusalem Post wrote about a poll that showed Trump leading by 0.8 percent in Pennsylvania. That was in late October. When I saw the headline, I was surprised, because Biden was up significantly in the average of Pennsylvania polls. The poll Trump was leading in was by Trafalgar Group, and I knew from FiveThirtyEight that Trump did better in Trafalgar Group polls. Somebody in Trafalgar Group said that they did a better job of finding shy Trump voters than other polls. It's one thing to read speculation about why polls were wrong. It's more unusual for a pollster to believe they are correct and explain why they think other pollsters are wrong.

    They said that Trump improved in counties in which the winner in 2016 got at least 60 percent, and Biden improved over Clinton in counties where neither candidate got 60 percent in 2016.

    The last ten polls on Wikipedia had Collins down by an average of 3.2 percent. Nine of them did one poll for all four candidates and one poll for ranked choice voting with the bottom candidates removed. Lisa Savage is far left, and Max Linn is so far right that he cut up a mask, so it's obvious that Savage's voters would prefer Gideon and Linn's voters would prefer Collins. Collins won by 8.9 percent. Collins and Linn combined to beat Gideon and Savage by 5.6 percent. The 3.2 percent came from including all four candidates, so to be consistent I will compare that to 8.9 percent, and that's a difference of 12.1 percent. That's big, but it's not as big as the 8 + 10 = 18 that you said.
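
    Checking that Collins arithmetic with the numbers as stated above:

```python
# Margins in points, positive = Collins; figures as given in the post.
poll_avg_margin = -3.2   # Collins down 3.2 in the last ten polls
actual_margin = 8.9      # Collins's actual margin of victory
polling_miss = actual_margin - poll_avg_margin
print(round(polling_miss, 1))  # prints 12.1, the gap discussed above
```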

    The Hispanics in the Rio Grande Valley had a giant swing towards Trump. According to the Texas Tribune, the counties in Texas that border Mexico combined to vote for Clinton by 33 percent and Biden by 17 percent.

    Most Jews are Democrats, but Orthodox Jews like Trump because he supports Israel. My district (New York 4) had an Orthodox Jew who won an award for supporting Israel lose the Republican primary. One Senate candidate was born in Israel, and we knew she would lose by a ton. Merav Ben-David is a Democrat who lost Wyoming's open seat to Cynthia Lummis.

    I read that pollsters ask to speak with the adult/registered voter/likely voter in the household with the earliest birthday by date, ignoring the year (or something else random like that), so that the respondent is chosen randomly rather than being whoever happens to answer.
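
    That within-household "earliest birthday" rule can be sketched like this (the names and fields here are hypothetical):

```python
from datetime import date

def select_respondent(household):
    """Pick the household member whose birthday (month and day, year
    ignored) falls earliest in the calendar year -- a cheap way to
    randomize which eligible adult gets polled, rather than always
    interviewing whoever answered the phone."""
    return min(household,
               key=lambda m: (m["birthday"].month, m["birthday"].day))

household = [
    {"name": "Pat", "birthday": date(1970, 9, 12)},
    {"name": "Sam", "birthday": date(1995, 3, 4)},
]
chosen = select_respondent(household)  # Sam: March 4 precedes September 12
```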

    Fox News has more viewers than MSNBC. It's not because there are more Republicans*. It's because Republicans are more devoted to Fox News than Democrats are to MSNBC. That was true before Trump.

    * Something surprising is that Pennsylvania has a lot more registered Democrats than Republicans.
     