2017 Simulation: Ratings, Ranks, and NCAA Tournament Bracket

Discussion in 'Women's College' started by cpthomas, Aug 10, 2017.

  1. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    With all the schedules set, I have been able to do a simulation for the entire season that includes end-of-season ratings, ranks, and other data, and a simulation for the #1, #2, #3, and #4 seeds, the Automatic Qualifiers, and the unseeded at-large selections for the NCAA Tournament. If you're interested, go to the RPI and Bracketology for D1 Women's Soccer Blogspace and check out these three posts (I recommend reading them in order):

    Pre-Season RPI Ranks and NCAA Tournament Bracket

    Simulated Rankings for the 2017 Season - Pre-Season

    Simulated Bracket for the 2017 Season - Pre-Season

    And, if you are really serious about understanding how the bracket simulation system works, go to NCAA Tournament: Predicting the Bracket, Track Your Team at the RPI for Division I Women's Soccer website, read that page, and then follow the instructions for using the 2017 Website Factor Workbook Preseason attached at the bottom of the page.
     
  2. Nice shot

    Nice shot New Member

    Jun 22, 2016
    Club:
    Chelsea FC
    CP, this is an incredible amount of work and really appreciated. Thanks.
     
    soccersubjectively repped this.
  3. Soccerhunter

    Soccerhunter Member+

    Sep 12, 2009
    Great work, cp! Since your simulations all depend on initial ranking (and H-A schedules) you will probably get a lot of questioning of initial rankings. As one who has tried to work within a defensible rational scheme in my class rankings, I trust your methodology. ...but there sure are some "wow!" ones on your list.

    ...like Duke, who, if they play anywhere near their talent potential could easily take it all.

    But stick to your guns.... I, and many others, (maybe even in the betting profession) will be very interested to see how close you come.

    Cheers!
     
  4. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    #4 cpthomas, Aug 12, 2017
    Last edited: Aug 12, 2017
    I have a number of ways to project Duke's pre-season rating, including (but not limited to):

    Trend (straight line, using the last 10 years' ratings): 0.6266

    Trend (straight line, using the last 8 years' ratings): 0.6266 (yes, the 10 and 8 year trends are the same, not typical)

    Average (using the last 10 years' ratings): 0.6159

    Average (using the last 8 years' ratings): 0.6173

    Based on a study of how these possible pre-season ratings compared to actual game results during the 2016 season, for all teams whose coaches have been head coaches as long as Duke's, the average of the last 8 years' ratings is the best predictor of what Duke's 2017 end-of-season rating will be, i.e., 0.6173.

    This approach does not take into account roster changes or anything else, rather only that the coach has been head coach for 9 or more years. In theory, an approach like this, coupled with detailed information about roster and other changes and factors, might produce the best ratings. But, so far I do not know of a system of detailed information that would make the "past average ratings" system better. And, if anyone wonders, yes, the systems I reviewed did include a system that looks at players who have graduated and players who are incoming.
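    For anyone who wants to tinker with these two projection methods, here is a minimal Python sketch of a straight-line (least-squares) trend versus a simple average; the ratings in the example are hypothetical, not Duke's actual history:

```python
# Sketch of the two projection methods: a straight-line (least-squares)
# trend extended one season forward, vs. a simple average of past ratings.
# Ratings are listed oldest-first; the values below are hypothetical.

def trend_projection(ratings):
    """Fit a straight line to past ratings and extend it one more season."""
    n = len(ratings)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(ratings) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ratings))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * n  # projected rating for next season

def average_projection(ratings):
    """Simple mean of past ratings."""
    return sum(ratings) / len(ratings)

past = [0.640, 0.628, 0.615, 0.622, 0.610, 0.618, 0.605, 0.612]  # hypothetical
print(round(trend_projection(past), 4))
print(round(average_projection(past), 4))
```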

    The bottom line is that, from a statistical perspective, so far as I have been able to determine, the coach is the primary contributor to how a team will do (which takes into account recruiting skill, how much the coach can get out of players, practice coaching skill, game coaching skill, etc.). This may explain why BCS schools are willing to pay their head football and basketball coaches so much -- they believe that who the coach is, is what really matters.

    I would love to do a D1 women's soccer study of the rankings of schools' incoming frosh over a period of years, as compared to how those schools have done in the rankings. I suspect there will be some correlation, but that it won't be that great. There are some coaches who are great at recruiting, but not that great at coaching once they have the recruits; and there are coaches who aren't that great at recruiting, but are great at coaching the players they have. It would be really interesting to get a picture of where coaches fit when these two factors are at play.
     
  5. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Adding to my previous post about Duke:

    Over the last 10 years, regarding the NCAA Tournament:

    #1 seeds: no team with a rating below 0.6403 has gotten a #1 seed. So, if one projects Duke's rating for 2017 based on its 10 or 8 year trend or its 10 or 8 year average, it won't get a #1 seed. Which means that if Duke is to get a #1 seed, it will have to outperform what it normally has done during its coach's tenure so far.

    #2 seeds: no team with a rating below 0.6220 has gotten a #2 seed. If one projects Duke's rating for 2017 based on its 10 or 8 year trend, it will be a little over this threshold, so it conceivably will be a serious candidate for a #2 seed. But, if Duke's rating is better predicted based on its 10 or 8 year average, it won't meet this threshold.

    #3 seeds: no team with a rating below 0.6165 has received a #3 seed. If Duke's rating is better predicted based on its 10-year average, it barely will meet this threshold and might receive a #3 seed. On the other hand, if based on its 8-year average (which appears to be the best measure), it won't meet the threshold.

    #4 seeds: Duke will meet all the thresholds, which means it could be a candidate for at least a #4 seed.

    Bottom line is that if Duke is to get a College Cup level seed in the NCAA Tournament, it will have to do better than its rating trend indicates it is likely to do and also better than its average ratings over time have suggested it is likely to do.

    It will be interesting to see how it plays out. We'll have a good idea after the first day of the season and, if not then, by September 8 at the latest.
     
  6. Merlo Mighty

    Merlo Mighty Member

    Jun 30, 2014
    Chris: I'm a bit surprised you project our beloved Portland Pilots to end up ranked #57, a year after having their worst WCC finish in program history. In the preseason coaches poll, they're picked to finish fifth behind Santa Clara, BYU, Pepperdine and Loyola Marymount. Yet you predict Loyola Marymount, a team that's beaten Portland two years running, to end its season ranked #172. Was there an error in your calculations?
     
  7. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    No, you have to read the first of the three posts, which explains how the system works. In particular, read the warning about not taking the initial simulations seriously. Over the course of the season, they'll become increasingly reliable and near the end of the season, quite reliable.

    For Portland, the assigned pre-season rating is 0.5946, which is its average rating over the last 8 years, since Smith has been head coach for 9 or more years. For Loyola Marymount, the assigned pre-season rating is 0.5336, which is its average rating over the last 6 years, since Myers has been the head coach for between 4 and 8 years. Everything builds from there.
     
  8. Gilmoy

    Gilmoy Member+

    Jun 14, 2005
    Pullman, Washington
    Nat'l Team:
    United States
    Yes, it's relatively insensitive to 1-2 year deviations from the mean.
    Last year was a down year, and can be tossed as bad data :D
    (Sustain those results for ~5 years, and they'll become the new mean.)

    It also happily ignores player turnover, assistant coach changes, conference changes, field reconstructions, actually winning last year's title :laugh:, and the light bulb going on over players' heads. Also, it omits 2017 data, because there isn't any yet :)

    But for a preseason projection, it's decent. More interesting is to watch how it converges to the actual RPI as the season progresses. By the end of the regular season + tournaments, it will be darn close to the final rankings that determine the tournament field.
     
  9. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    #9 cpthomas, Aug 13, 2017
    Last edited: Aug 13, 2017
    Actually, by the end of the regular season and tournaments, it will be the final rankings, as I will have substituted all the actual game results for simulated results and computed the ratings using the NCAA's formula.
     
    Gilmoy repped this.
  10. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I'll add a little more about the background for assigning temporary pre-season ratings to teams based on coaching longevity.

    The study I did was based on the 2016 season. I tried a variety of systems for assigning pre-season ratings, to see how close the use of those ratings came to the actual final ratings for the 2016 season. What this means is, I used the assigned pre-season ratings as the basis for deciding the outcomes of all games; then computed Adjusted RPIs just as the NCAA formula does, with those simulated outcomes as my database; and then computed the average differences between those simulated Adjusted RPIs and the ARPIs teams actually had for the 2016 season. I tested a bunch of different systems this way, and then looked to see which system had the smallest average difference between its ARPIs and the actual ARPIs. I selected, for use in this year's simulation, the system that had the smallest difference, which is the one I've described at the RPI and Bracketology blog based on average ARPIs over a number of years, with the number of years based on the current head coach's longevity at the school.

    One of the systems I tested is based on Chris Henderson's 2016 pre-season rankings of teams within their respective conferences. In doing his in-conference rankings, he considers a number of factors including coaching, players and scoring lost from the previous year, values of incoming players, etc. I looked at each conference and determined the average rating of the actual #1 teams from the conference over a prior period of years, the average rating of the #2 team, and so on from top to bottom through the entire conference. Then, using CH's in-conference ranks, I assigned ratings accordingly. This gave me pre-season ratings for all the D1 teams except the independents, and for them I picked what seemed to be an appropriate rating based on past history.
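    A sketch of that rank-to-rating assignment, with hypothetical team names and position averages:

```python
# Hypothetical sketch: assign pre-season ratings from predicted in-conference
# ranks, using the historical average final rating of each finishing position.

def ratings_from_conference_ranks(predicted_ranks, avg_rating_by_position):
    """predicted_ranks: {team: predicted in-conference rank, 1 = best}.
    avg_rating_by_position: {rank: average final rating of the teams that
    actually finished at that rank over a prior period of years}."""
    return {team: avg_rating_by_position[rank]
            for team, rank in predicted_ranks.items()}

# Hypothetical three-team conference:
sketch = ratings_from_conference_ranks(
    {"Team A": 1, "Team B": 2, "Team C": 3},
    {1: 0.6400, 2: 0.5900, 3: 0.5600})
```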

    When I compared this system, using the CH in-conference rankings as a base, to the coach-longevity-based system, the average difference between this system's ratings and the actual 2016 ratings was greater than the average difference for the coach-longevity-based system.

    I believe that CH has tweaked his formula for doing in-conference rankings this year, his system being a work in progress, so I'll be testing it again next year, running my tests over the 2017 as well as the 2016 season, and possibly the 2015 season as well. Nevertheless, part of what my study suggests so far, with the limited test I've conducted, is that players graduating, new players coming in, and so on may not be as significant a factor as one might think; rather, the single most powerful factor for predicting how teams will do over the course of the season may be who the coach is.
     
  11. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    #11 cpthomas, Aug 23, 2017
    Last edited: Aug 23, 2017
    I now have substituted into my weekly simulations the actual results of all games through Monday, August 21, and have produced an updated simulation for the entire season that includes end-of-season ratings, ranks, and other data, and a simulation for the #1, #2, #3, and #4 seeds, the Automatic Qualifiers, and the unseeded at-large selections for the NCAA Tournament. If you're interested, go to the RPI and Bracketology for D1 Women's Soccer Blogspace and check out the two most recent posts titled: 2017 Simulated RPI Ranks 8.22.2017 and 2017 NCAA Tournament Bracket Simulation 8.22.2017.

    And, if you are really serious about understanding the details of how your team is doing or how the overall bracket simulation system works, go to NCAA Tournament: Predicting the Bracket, Track Your Team at the RPI for Division I Women's Soccer website, read that page, and then follow the instructions for using the 2017 Website Factor Workbook 8.22.2017, which is attached at the bottom of the page.
     
    olelaliga repped this.
  12. Gilmoy

    Gilmoy Member+

    Jun 14, 2005
    Pullman, Washington
    Nat'l Team:
    United States
    We loves numbers :)

    2017/08/18 Fri: WSU 0-0 Minnesota

    Holding us to a draw at our place boosts Minnesota by +10 :D

    Peering into the methodology like a pensieve, we note that the criterion is head coach's years of tenure, which determines a window size for average (final) ARPI. There are 3 tiers:

    1-3 years: window = 3 years
    4-8 years: window = 6 years
    9+ years: window = 8 years

    Note the round-up in the two shorter tiers. This means that a team whose HC has been there for 1-2 years, or 4-5 years, still gets a pro-rated contribution from the previous HC's ARPI(s). This is probably a net (small) negative, since the predominant reason for a HC change is lack of wins by the previous HC.

    Minnesota: HC Stefanie Golan started in 2012 = 5 full years, so UMn's window still includes 1 year of HC Mikki Denny Wright from 2011. Minnesota has been strong since 2016, but not over the full 6 year window. Expect their simulated RPI to zoom upward as 2017 data replaces the windowed-average ARPI.

    WSU: HC Todd Shulenberger started in 2015 = 2 full years, so WSU's 3-year window includes HC Steve Nugent's single year in 2014. That was a decent year for us; we hosted Seattle in R1. (WSU is pretty unique for having 4 HCs in 5 years and none of the changes being due to too many losses.) WSU's 3-year average ARPI is quite high, as we hosted R1 in two of them. Expect WSU's simulated RPI to ... do the unexpected ;)
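    The tier scheme above can be sketched as follows (the tier boundaries are as stated in this post; everything else is illustrative):

```python
# The coach-tenure tiers described above, as a lookup plus the windowed
# average. Everything other than the tier boundaries is illustrative.

def rating_window_years(coach_tenure_years):
    """Years of prior final ARPIs averaged for the pre-season rating."""
    if coach_tenure_years >= 9:
        return 8
    if coach_tenure_years >= 4:
        return 6
    return 3

def preseason_rating(final_arpis_newest_first, coach_tenure_years):
    """Average the most recent `window` final ARPIs. The window can exceed
    the coach's tenure, so it may pick up a prior coach's seasons."""
    window = rating_window_years(coach_tenure_years)
    recent = final_arpis_newest_first[:window]
    return sum(recent) / len(recent)
```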
     
  13. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    #13 cpthomas, Aug 23, 2017
    Last edited: Aug 23, 2017
    Gilmoy, your post gets at a question that has interested me, which is how long a shadow a prior coach can cast. One prior coach of interest to me has been Clive Charles. How long did the "Clive effect" last after his death? I believe his effect lasted for a good number of years. Another one to look at is Amanda Cromwell at UCF. How long has her shadow lasted there? When will these teams "normalize"? Those are two great coaches.

    One can ask a similar question about very poor coaches. How long are their shadows?

    The impression I have is that a truly great coach's impacts project much farther into the future than a very poor coach's impacts.

    There also is a related issue: The system uses average ARPIs over time, because that turned out to be the best representation of teams' future ratings that I could come up with. For example, it does better than using trended ratings. This suggests that, on average, looking at how teams did last year, or at their apparent direction based on how they've done over the last few years, is not that good a representation of how they'll do next year -- looking at averages over time is better. But it certainly appears to me that some teams are trending towards better or towards poorer ratings. The system, during the early part of the season, does not capture these trends. Once I shift from using the assigned pre-season ratings to using teams' actual current ARPIs, however, this should not be a problem.
     
  14. Gilmoy

    Gilmoy Member+

    Jun 14, 2005
    Pullman, Washington
    Nat'l Team:
    United States
    Right. I envision RPI as salmon swimming side-by-side in the river. Sometimes a big movement in points only keeps pace with those around you, and so your ordinal rank doesn't change much. Sometimes you gain by standing still, as fishies ahead of you drop back. You could be alone in the midst of a sparse part of the river, so a huge change doesn't even close the gap ... or neck-and-neck with dozens of fish, so that many tiny changes cause a massive re-sorting.

    And someday we'll actually make a cloud app that displays RPI this way :inlove: and you can click to get the dragonback-perspective, fully rendered in first-person 3D. I'd pay money to ride that all day :coffee:

    Visualization #2 is: we're all runners in a woods, and there's one bear behind us all :barefoot:
     
  15. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I have one more piece of info for this week, in terms of how much "credit" to give to the current simulation (some, but not much -- take it with a very big grain of salt):

    One test of a rating system is how well its game-site-adjusted ratings compare to actual game results.

    NCAA Current ARPI (based on 30,000+ games):

    Ratings are correct, match game result: 72.6%

    Ratings are incorrect, game is a tie: 10.7%

    Ratings are incorrect, opposite result: 16.7%

    NCAA Current Non-Conference ARPI (based on 30,000+ games):

    Ratings are correct: 68.4%

    Ratings are incorrect, tie: 10.7%

    Ratings are incorrect, opposite result: 20.9%

    Simulation Using Assigned Pre-Season Rating (based on the 288 games through 8/21):

    Ratings are correct: 63.9%

    Ratings are incorrect, tie: 8.3%

    Ratings are incorrect, opposite result: 27.8%

    When I consider that the ARPI and NCARPI are derived from the actual game results over the course of the entire season, the Simulation's performance is surprisingly (to me) good. The Simulation's results come pretty close to the NCARPI -- which is not so much an indication of the merit of the Simulation's assigned pre-season ratings as it is of the lack of merit of the NCARPI.
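    For the curious, the three buckets in these comparisons can be tallied along the following lines -- a sketch only, assuming the ratings have already been adjusted for game site:

```python
# Sketch of the three-bucket tally behind the percentages above. Each game
# is (rating_a, rating_b, result), with result "A", "B", or "T" and the
# ratings assumed already adjusted for game site. Hypothetical helper.

def accuracy_buckets(games):
    """Return percentages (correct, incorrect-tie, incorrect-opposite)."""
    correct = tie = opposite = 0
    for rating_a, rating_b, result in games:
        predicted = "A" if rating_a > rating_b else "B"
        if result == predicted:
            correct += 1      # higher-rated team got the better result
        elif result == "T":
            tie += 1          # ratings called a winner, game was a tie
        else:
            opposite += 1     # lower-rated team won
    n = len(games)
    return tuple(round(100 * x / n, 1) for x in (correct, tie, opposite))
```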
     
  16. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I now have substituted into my weekly simulations the actual results of all games through Sunday, August 27, and have produced an updated simulation for the entire season that includes end-of-season ratings, ranks, and other data, and a simulation for the #1, #2, #3, and #4 seeds, the Automatic Qualifiers, and the unseeded at-large selections for the NCAA Tournament. If you're interested, go to the RPI and Bracketology for D1 Women's Soccer Blogspace and check out the two most recent posts titled: 2017 Simulated RPI Ranks 8.28.2017 and 2017 NCAA Tournament Bracket Simulation 8.28.2017.

    And, if you are really serious about understanding the details of how your team is doing or how the overall bracket simulation system works, go to NCAA Tournament: Predicting the Bracket, Track Your Team at the RPI for Division I Women's Soccer website, read that page, and then follow the instructions for using the 2017 Website Factor Workbook 8.28.2017, which is attached at the bottom of the page.
     
  17. L'orange

    L'orange Member+

    Ajax
    Netherlands
    Jul 20, 2017
    Thank you!
     
  18. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I now have substituted into my weekly simulations the actual results of all games through Sunday, September 3, and have produced an updated simulation for the entire season that includes end-of-season ratings, ranks, and other data, and a simulation for the #1, #2, #3, and #4 seeds, the Automatic Qualifiers, and the unseeded at-large selections for the NCAA Tournament. If you're interested, go to the RPI and Bracketology for D1 Women's Soccer Blogspace and check out the two most recent posts titled: 2017 Simulated RPI Ranks 9.4.2017 and 2017 NCAA Tournament Bracket Simulation 9.4.2017.

    And, if you are really serious about understanding the details of how your team is doing or how the overall bracket simulation system works, go to NCAA Tournament: Predicting the Bracket, Track Your Team at the RPI for Division I Women's Soccer website, read that page, and then follow the instructions for using the 2017 Website Factor Workbook 9.4.2017, which is attached at the bottom of the page.

    As I said in both of the blog posts, next week I anticipate seeing some significant changes in the simulated rankings and brackets. This will be due to my system changing from using assigned pre-season ratings (based on teams' average historic ARPIs) as the basis for simulating future game results to using teams' then-current actual ARPI ratings. Next week, I'll explain why I make that change at this point in time.
     
  19. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I now have substituted into my weekly simulations the actual results of all games through Sunday, September 10, and have produced an updated simulation for the entire season that includes end-of-season ratings, ranks, and other data, and a simulation for the #1, #2, #3, and #4 seeds, the Automatic Qualifiers, and the unseeded at-large selections for the NCAA Tournament. If you're interested, go to the RPI and Bracketology for D1 Women's Soccer Blogspace and check out the two most recent posts titled: 2017 Simulated RPI Ranks 9.12.2017 and 2017 NCAA Tournament Bracket Simulation 9.12.2017. Please be sure to read the introductory comments to the Ranks post.

    And, if you are really serious about understanding the details of how your team is doing or how the overall bracket simulation system works, go to NCAA Tournament: Predicting the Bracket, Track Your Team at the RPI for Division I Women's Soccer website, read that page, and then follow the instructions for using the 2017 Website Factor Workbook 9.11.2017, which is attached at the bottom of the page.

    This week -- following the completion of the week 4 games -- is when I make a significant change in the system for simulating results. The change is from using assigned pre-season ratings as the basis for simulating future game results to using teams' current actual ARPI ratings as the basis. I make this change now because this is the point at which the actual ARPI ratings begin to match actual game results better than the assigned pre-season ratings do.

    As a matter of interest, this week's actual ARPI ratings match the actual results of games already played 77.2% of the time; games they would have simulated as win-loss games but that actually were ties comprise 8.4% of all games played; and games they would have simulated as ties but that actually were win-loss games comprise 2.9% of all games played. Thus the current actual ARPI ratings match, or are within a "half" of, the actual results to date 88.5% of the time. (I expect these numbers will diminish some over the balance of the season, with ratings matching results 72 to 73% of the time at the end of the season. That is the range of what the actual end-of-season ARPI ratings have done over the last 10 years.)
     
    Gilmoy repped this.
  20. Gilmoy

    Gilmoy Member+

    Jun 14, 2005
    Pullman, Washington
    Nat'l Team:
    United States
    Yale nobly took the hit (vs. Stanford) for their conference, while Princeton is ... projected to win (repeat?) the Ivy? By the same reasoning, I guess even mighty UCLA is projected to not win the Pac-12 (because Stanford is above them). But does Stanford really have enough Elements 2+3 at this stage to overcome UCLA's +0.143 in Element 1?

    ... wha?
     
  21. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Good questions/notes, Gilmoy. Tomorrow, I'll write a little about UCLA/Stanford and Southern Cal (USC to those of us out here on the West Coast).

    In the meantime, I'm not sure what your notations +6 = 0-1 and +7 = 0-0, next to Stanford and UCLA respectively, mean.
     
  22. Gilmoy

    Gilmoy Member+

    Jun 14, 2005
    Pullman, Washington
    Nat'l Team:
    United States
    +6=0-1 = 6 wins, 0 draws, 1 loss. Like writing 6-1-0, except unambiguous.
     
    cpthomas repped this.
  23. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    In response to Gilmoy's next to previous post wondering about Stanford and UCLA:

    Stanford's three RPI elements:

    Element 1 (Winning Percentage): 0.8571
    Element 2 (Average of Opponents' Winning Percentages Against Other Teams): 0.8167
    Element 3 (Average of Opponents' Opponents' Winning Percentages): 0.5879

    UCLA's three RPI elements:

    Element 1: 1.0000
    Element 2: 0.4966
    Element 3: 0.6226

    Or, to put it in simpler terms, as a way to eyeball why Stanford could have a poorer winning percentage but be well ahead of UCLA in the rankings, Stanford's opponents' combined records (not counting their results against Stanford) are +35 =2 -7. UCLA's are +18 =6 -17. (For Element 2, the formula doesn't simply use these numbers. Rather, it computes each opponent's winning percentage against other teams and then takes the average of these winning percentages.)

    Looking at these numbers, what they say is that UCLA's winning percentage so far doesn't mean a lot because its opponents have been pretty mundane. On the other hand, Stanford's winning percentage means quite a bit because its opponents have been excellent.

    As discussed at the RPI Blog, this week the effective weights of the three elements, for RPI purposes, are 37% Element 1, 50% Element 2, and 13% Element 3. In other words, the RPI currently gives too much effective weight to strength of schedule. So, the difference in the rankings this week between Stanford and UCLA probably partly is accounted for by that. Nevertheless, it seems pretty clear that Stanford's schedule has been much stronger than UCLA's.
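    As a quick check on the Stanford/UCLA comparison, here is the RPI computed from the three elements using the NCAA's nominal 25/50/25 weights (the effective weights discussed above differ from the nominal ones):

```python
# Quick check on the Stanford/UCLA numbers above, using the NCAA's nominal
# 25/50/25 weighting of the three RPI elements. (The *effective* weights,
# discussed in the post, differ from the nominal ones.)

def rpi(element1, element2, element3):
    """NCAA RPI from its three elements, nominal weights."""
    return 0.25 * element1 + 0.50 * element2 + 0.25 * element3

stanford = rpi(0.8571, 0.8167, 0.5879)   # elements from the post above
ucla = rpi(1.0000, 0.4966, 0.6226)
print(round(stanford, 4), round(ucla, 4))
assert stanford > ucla  # Stanford ranks ahead despite the lower Element 1
```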
     
    Gilmoy repped this.
  24. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    And, in response to Gilmoy's next to previous post about Southern California (aka USC):

    Their current RPI elements are:

    Element 1: 0.8000
    Element 2: 0.4286
    Element 3: 0.6057

    (These include some slight changes from their numbers based on games through 9/10, as since then I've added the 9/11 games.)

    From the simpler view of their opponents' combined records against other teams, they are +13 =4 -18.

    When the system simulates the results of future games, it compares the two opponents' current ARPIs, makes an adjustment for home field advantage, and then simulates the result as a win, loss, or tie (it simulates a tie if the two opponents' ratings are within 0.0052 of each other). With that taken into account, and matching up Southern Cal with each of its remaining opponents given their and their opponents' current ARPIs, the simulation has Southern Cal ending the season with a record of +7 =0 -11! (Current actual record of +4 =0 -1, combined with Ws v San Diego, Oregon State, and Oregon and Ls v Loyola Marymount, Utah, Arizona, Arizona State, Colorado, Washington, Washington State, California, Stanford, and UCLA.)

    Of course, that's just what the current numbers say, and they're not derived from anything close to a sufficiently sized database to support good conclusions. What the then-current numbers say at the end of the season is all that really matters. One thing the current numbers are good for, however, is helping expose that our convictions about how good teams are often aren't based on what's actually happened so far this season.
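    The simulation rule described above (compare site-adjusted ARPIs, call a tie inside the 0.0052 band) can be sketched as follows; the home-field adjustment value here is hypothetical, since the actual figure isn't stated in this thread:

```python
# Sketch of the future-game simulation rule: compare site-adjusted ARPIs and
# call a tie when the gap is inside the band. TIE_BAND is from the post;
# HOME_BONUS is a hypothetical stand-in for the actual adjustment.

TIE_BAND = 0.0052
HOME_BONUS = 0.0100  # hypothetical value, not stated in the thread

def simulate(home_arpi, away_arpi, neutral_site=False):
    """Return "H" (home win), "A" (away win), or "T" (tie)."""
    adjusted_home = home_arpi if neutral_site else home_arpi + HOME_BONUS
    diff = adjusted_home - away_arpi
    if abs(diff) <= TIE_BAND:
        return "T"
    return "H" if diff > 0 else "A"
```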
     
    Gilmoy repped this.
  25. Gilmoy

    Gilmoy Member+

    Jun 14, 2005
    Pullman, Washington
    Nat'l Team:
    United States
    OK, so briefly: this early in the season, SOS (Element 2) really is the elephant in the canoe. And schedules are set so far in advance that there's no way to anticipate how well your opponents actually do. Some years they all accumulate early losses, and this happens.

    For example, Santa Clara going +0=1-4 in five Pac-12/B1G road matches was not expected, and gives them a low Element 1, and contributes to low Element 2s for each of those five opponents. This may improve as Santa Clara goes through the WCC schedule. It's the old RPI enemy-of-my-enemy game: every team is all of its opponents' biggest fans, and wants them to win all of their matches except the ones against itself.
     
