Post-match: The 2020 "What Polling Got Wrong" Thread...

Discussion in 'Elections' started by Dr. Wankler, Nov 2, 2020.

  1. xtomx

    xtomx Member+

    Chicago Fire
    Sep 6, 2001
    Northern Wisconsin, but not far from civilization
    Club:
    Chicago Fire
    Interesting point.
     
  2. soccernutter

    soccernutter Moderator
    Staff Member

    Tottenham Hotspur
    Aug 22, 2001
    Near the mountains.
    Club:
    Tottenham Hotspur FC
    Nat'l Team:
    United States
    From the election thread...

    I forget which podcast talked about this, but one of the suggestions for why the polls skewed wrong was that Republican voters just didn't pick up the phone when called. Another suggestion along those lines is that, since there were a lot of new Republican registrations late, the pollsters had somehow not factored those in, or had not been able to reach enough of them to weight the polls effectively.

    If the latter is really the case, that is actually a good sign for Dems. The Dems got their shit together, early, and the Republicans were playing catch up. But the Dems were so far ahead that the Reps just couldn't get it done.
     
  3. ChrisSSBB

    ChrisSSBB Member+

    Jun 22, 2005
    DE
    Nat'l Team:
    United States
    Are people who pick up their phone whenever a strange number pops up truly a random sample?
     
    Q*bert Jones III, JohnR and Dr. Wankler repped this.
  4. JohnR

    JohnR Member+

    Jun 23, 2000
    Chicago, IL
    It seems that Democrats have strange habits.
     
    ChrisSSBB repped this.
  5. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    Nate Silver suggested a number of these possibilities

    e.g. knowledge workers were more likely to be home during Covid; Dems were more enthusiastic, and therefore more likely to take a survey, etc.
     
    soccernutter repped this.
  6. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    Listened to the recent 538 model talk.

    Nate Silver does not agree that the polls are so far off compared to expected.
     
  7. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    On its face, this seems like a ridiculous statement, but IMO there is something to it. When you consider the election was run during a global pandemic, with record turnout, an extremely unconventional candidate, and people using methods of voting at rates they hadn't before, how close should the polls be?

    I think that’s part of the reason 538 baked so much uncertainty into their model. Even more than last time when they expressed greater uncertainty than other aggregators. Being 6 pts off seems like a lot, but fundamentally that is only a couple of little tweaks away from being reality. A couple points of relative turnout error. 1 in 50 voters switching their soft support from one candidate to another, etc is all it really takes.

    I need to listen to their take, but if the argument is that people place far too much certainty in polls, then there is a lot of truth to that. If you look at them with realistically wide confidence intervals going in, you're likely to believe that Biden was favored in WI/PA/MI, but that it's not impossible those states are close; that Biden was favored in GA/FL/NC/AZ, but it's not impossible those states split; and that races in places like OH/IA/TX may be close, but those states may revert a bit to their partisan lean. The polls were off, but not to the extent that they didn't adequately describe the fundamentals of the race.
     
  8. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    Still haven’t listened, but to expound upon that a bit, people have a hard time:

    1) focusing on what is important. In the Presidential election, only 9 states and 2 CDs were polling within 5pts of the tipping point. That's it. Nothing else was important. All 41 states 5pts or more from the tipping point went exactly as forecast.

    2) understanding uncertainty. 538 put a win probability on every one of those 9 states. It was literally the first thing people saw if they clicked on a state forecast page. It's spelled out right there. The implied standard deviation on the poll aggregation varied by state but was on average 5pts. If a state was polling +8 Biden, it's not inconceivable that it would be much closer or even go Trump. Every state at the tipping point (PA at +4.7D) and to the left went Dem. 2 of the 4 states to the right (NC and FL) went GOP; the other 2 (AZ and GA) went Dem. The outcome suggested underperformance versus the median model expectation, but it was not materially off.

    It really is the same story as 2016. People have a hard time with uncertainty even when the odds are explicitly disclosed. There is also a disconnect between poll MOE (often around 3pts), which is purely a function of random sampling and assumes no bias in who responds, and what some modelers/aggregators like 538 incorporate: they assume there could be random sample issues, but they also consider that there could be additional error beyond basic sampling.

    People are looking for something that is more precise (and comfortable) than what can realistically be attained.
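    To make the win-probability point above concrete, here's a toy sketch (my own illustration, not 538's actual model, which uses a fat-tailed distribution rather than a plain normal one) of turning a polled lead plus an implied standard deviation into a win probability:

    ```python
    from statistics import NormalDist

    def win_probability(lead_pts: float, sigma_pts: float) -> float:
        """P(true margin > 0) when the polled lead is treated as normally
        distributed around the truth with the given implied standard
        deviation. Plain normal approximation for illustration only."""
        return 1 - NormalDist(mu=lead_pts, sigma=sigma_pts).cdf(0)

    win_probability(4.7, 5)  # PA at +4.7D with a 5pt sigma: roughly a 4-in-5 shot
    ```

    Even a +8 lead with a 5pt sigma leaves a noticeable chance of the state flipping, which is the "not inconceivable" part.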
     
    JohnR repped this.
  9. soccernutter

    soccernutter Moderator
    Staff Member

    Tottenham Hotspur
    Aug 22, 2001
    Near the mountains.
    Club:
    Tottenham Hotspur FC
    Nat'l Team:
    United States
    I've got that in my queue and will check it out today.
     
  10. soccernutter

    soccernutter Moderator
    Staff Member

    Tottenham Hotspur
    Aug 22, 2001
    Near the mountains.
    Club:
    Tottenham Hotspur FC
    Nat'l Team:
    United States
    One of the things Nate says on a regular basis, and probably did in his latest, is that for Presidential election predictions, statistically there is a relatively small sample size to model on. The inference I get from that is that he tries to figure out the irregularities when there is really very little in the way of examples.
     
    The Jitty Slitter and ChrisSSBB repped this.
  11. soccernutter

    soccernutter Moderator
    Staff Member

    Tottenham Hotspur
    Aug 22, 2001
    Near the mountains.
    Club:
    Tottenham Hotspur FC
    Nat'l Team:
    United States
    PSA did a poll/survey a couple weeks before the election, and one of the things I found really interesting is that the voters who supported Individual One the most got their news from Fox, OAN, etc. Conversely, those who supported Biden the most skewed toward MSNBC, but not nearly to the level of Individual One's supporters. This, to me, suggests a "fanaticism" (as opposed to enthusiasm) for Individual One. IOW, devotion rather than excitement. I'm wondering if something like that could be incorporated into the polling models.
     
    superdave repped this.
  12. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    Yes

    So he was saying the 2018 polls were good, whereas 2016 was not so good. And at the end of the day, this miss was within the usual kind of range. But it depends a lot on which elections you include.

    e.g. 2012 was off in a big way, but because the miss was in Obama's favour, no one came up with the idea of shy Obama voters.
     
  13. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    I was doing DIY at the time so wasn't 100% tuned in, but I think he expected a 4% error as the normal range (with extra for the pandemic)

    So on that basis, PA and WI are not actually big outliers in the error curve, but in the fat part of possible outcomes
     
  14. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    Yes this is what he seemed to be saying

    PA for example, is well within the range of expected outcomes.

    The House stuff is more complicated, but again he explained that we can expect these races to all break in the same direction. Again I think in the reddish state races, with a double wave, Dems simply run out of voters, no matter how enthusiastic the base is.

    Also re turnout, final turnout is very close to what 538 ended up predicting.
     
  15. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    I don’t think sample size has much to do with it. For example, if we had a giant bag of marbles identical in every way except color (red and blue) you’d only need to pull 1070 marbles to keep the margin of error below 3% (95% confidence or 2 standard deviations). A sample of 1070 works on a population of 500 million almost exactly as well as it works for a population of 500 thousand. Almost all of the problem in political polling is biased sampling. It’s a world where different marbles can hide as you pull samples from the bag. They can even jump in or out of the bag without being detected.

    If a guy who watches Fox News is materially less likely to respond to a poll, it doesn’t matter very much if you poll 1100 people or 110,000 people. The sample will still be biased.

    If standard error = 4%, then the implied MOE would be 8%. This can vary across states and he’s using some sort of fat tailed Bayesian thing that is beyond me, so it’s entirely possible that a 4% std error in his model translates to a 5% std error for the states nearest the tipping point in a normal distribution.

    I think the problem is really in pollster MOE disclosures. This gives people a false sense of certainty when they look at polls. 3% in a theoretical marble draw can grow to 8% nationally and 10% in states that are demographically and politically tricky. But the poll MOE disclosure is severely limited to the theoretical marble draw rather than real world limitations.
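    The marble arithmetic above checks out; here's a quick sketch (my own, purely illustrative) of the standard simple-random-sample margin-of-error formula, with an optional finite population correction to show why population size barely matters:

    ```python
    import math

    def moe(n: int, p: float = 0.5, z: float = 1.96, population: int | None = None) -> float:
        """Margin of error for a simple random sample of size n at
        proportion p (worst case p=0.5), z=1.96 for 95% confidence.
        The finite population correction shrinks it only negligibly."""
        se = math.sqrt(p * (1 - p) / n)
        if population is not None:
            se *= math.sqrt((population - n) / (population - 1))
        return z * se

    moe(1070)                            # just under 3%
    moe(1070, population=500_000)        # ~identical
    moe(1070, population=500_000_000)    # ~identical
    ```

    None of this helps with biased sampling, of course: the formula assumes every marble is equally likely to be drawn, which is exactly the assumption that breaks in real polling.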
     
    EvanJ, song219, superdave and 1 other person repped this.
  16. The Jitty Slitter

    The Jitty Slitter Moderator
    Staff Member

    Bayern München
    Germany
    Jul 23, 2004
    Fascist Hellscape
    Club:
    FC Sankt Pauli
    Nat'l Team:
    Belgium
    I'd recommend listening to the podcast or checking 538 so I don't conflate things. Summary blog here.

    https://fivethirtyeight.com/features/the-polls-werent-great-but-thats-pretty-normal/
     
  17. JohnR

    JohnR Member+

    Jun 23, 2000
    Chicago, IL
    The problem with the polls is that the errors weren't random. Every highly rated pollster according to 538's ratings came in at between +5 and +7 for its final PA poll. Meaning they were all off by roughly the same amount in the same direction. They were herding.

    Now, if they were taking somewhat different approaches so that their errors had no pattern, then we could average their results and come up with a good answer. But when they herd like that, there's no benefit to having multiple pollsters.
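    The averaging point can be shown with a toy simulation (my own sketch, with made-up error sizes): independent pollster noise shrinks when you average across pollsters, but a shared, common-mode error (herding) survives averaging completely.

    ```python
    import random
    import statistics

    random.seed(1)

    def averaged_error_sd(shared_sd: float, independent_sd: float,
                          n_pollsters: int = 8, trials: int = 10_000) -> float:
        """Std dev of the error of the *averaged* poll when each pollster
        has independent noise plus a bias term shared by all of them."""
        errors = []
        for _ in range(trials):
            shared = random.gauss(0, shared_sd)
            polls = [shared + random.gauss(0, independent_sd) for _ in range(n_pollsters)]
            errors.append(statistics.fmean(polls))
        return statistics.stdev(errors)

    averaged_error_sd(0, 3)  # independent-only: averaging 8 polls cuts ~3pts to ~1pt
    averaged_error_sd(3, 0)  # shared-only: still ~3pts, averaging buys nothing
    ```

    Which is the point: multiple pollsters are only worth more than one if their errors aren't all pointing the same way.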
     
    jmartin1966 repped this.
  18. jmartin1966

    jmartin1966 Member+

    Jun 13, 2004
    Chicago
     
  19. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    #119 Chicago76, Nov 15, 2020
    Last edited: Nov 15, 2020
    Yeah. I had time to have a listen. Three things:
    1) They didn’t say as much as I thought they would.
    2) They did talk about the polls being more wrong in the Senate races, which suggests a lack of conservative response in polls rather than shy Trump voters.
    3) The other guy’s voice sounds more like Nate Silver’s voice in real life than Nate Silver here.

    Edit: one more thing, probably the best bit the other guy said: if the non-college-degree voter who is more likely to respond to your poll is a Dem voter, the fact that you now have the educational-attainment weighting correct in 2020 vs 2016 doesn't really help you.

    In summary, polls weren't that far off. I forgot to double the typical 3pt MOE to get the change in the actual margin, i.e. a 53-47 lead could actually be 50-50 or 56-44. So polls baked in a 6pt swing (ignoring third parties), 538 an 8pt swing, and in key states a 10pt swing.

    Two types of error here: random sampling error and pollster design error. The former is more or less random. The latter relates to finding methods to tease out the notion that the people who are answering your questions aren’t representative of the people voting. If less trustful people are both more likely to be conservative and less likely to participate, it’s not herding in the traditional sense. The best polls in FL ranged from +4D to +2R. There’s a fair bit of variance there. The problem is that tried and true methods good pollsters use to get an unbiased sample aren’t really working in this instance (at least) or possibly this type of political environment.
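    The "double the MOE" correction above is just arithmetic, but it trips people up often enough to be worth spelling out (a small sketch of my own, two-way race only):

    ```python
    def margin_range(lead_share: float, moe: float) -> tuple[float, float]:
        """A +/-moe error on each candidate's share moves the *margin*
        (lead minus trail) by up to 2*moe, ignoring third parties."""
        margin = 2 * lead_share - 100   # e.g. 53% vs 47% -> +6
        return (margin - 2 * moe, margin + 2 * moe)

    margin_range(53, 3)  # a 53-47 poll could really be anywhere from 50-50 to 56-44
    ```

    So a 3pt MOE on vote share is a 6pt band on the margin before any non-sampling error is even considered.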
     
  20. soccernutter

    soccernutter Moderator
    Staff Member

    Tottenham Hotspur
    Aug 22, 2001
    Near the mountains.
    Club:
    Tottenham Hotspur FC
    Nat'l Team:
    United States
    I'm not sure this is what I'm talking about. What I mean is not any one bag of marbles, but the final bag of marbles, and how many times that final bag has been drawn (in this case, once every 4 years). Since Nate is arguing that, statistically, there are not very many final bags of marbles in the sample, it is difficult to predict irregularities in voting behavior for the next final bag of marbles.
     
    Chicago76 repped this.
  21. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    Gotcha. That’s absolutely true. It’s not really the size of any particular polling sample. It’s the frequency of the event. We don’t know the relative turnout/weighting, ie, which marbles managed to jump into or out of the bag. That’s what I think you’re getting at. We also don’t know the relative propensity of particular groups to participate in a poll, ie which marbles are trying to avoid being grabbed vs those who are happy to hang out in parts of the bag where they’re more likely to be selected.

    Both of those things are important and unfortunately (or maybe fortunately) we only do this every 4 years. Each election has a slightly different dynamic and the marbles (both inside and outside the bag) are constantly changing.
     
    soccernutter repped this.
  22. xtomx

    xtomx Member+

    Chicago Fire
    Sep 6, 2001
    Northern Wisconsin, but not far from civilization
    Club:
    Chicago Fire
    I am so sick of pollsters and Nate Silver and all of that nonsense.
     
    sitruc and crazypete13 repped this.
  23. celito

    celito Moderator
    Staff Member

    Palmeiras
    Brazil
    Feb 28, 2005
    USA
    Club:
    Palmeiras Sao Paulo
    Nat'l Team:
    Brazil
    I call all of this pollsplaining. :ninja: :ROFLMAO:
     
    xtomx repped this.
  24. chaski

    chaski Moderator
    Staff Member

    Mar 20, 2000
    redacted
    Club:
    Lisburn Distillery FC
    Nat'l Team:
    Turks and Caicos Islands
    What if you lose your marbles?
     
    xtomx repped this.
  25. Chicago76

    Chicago76 Member+

    Jun 9, 2002
    What is so nonsensical about it?
     
