Use of the RPI for Division I Women's Soccer

Discussion in 'Women's College' started by cpthomas, Jan 25, 2008.

  1. Craig P

    Craig P BigSoccer Supporter

    Mar 26, 1999
    Eastern MA
    Nat'l Team:
    United States
Is that actually a design intent of the RPI? I've always taken it to be an undesirable artifact of the way the formula is constructed. In the abstract, there should NEVER be a penalty for winning a game, because there is ALWAYS a chance, however minuscule, that the game could have been lost or tied, and even against the weakest of foes a slight amount of information is gained from a win.

Note that in college hockey, a modification has been made to the RPI to eliminate the RPI penalty for beating a weak team in the conference tournament (offhand, they may have made it apply universally, but I know it applies to conference tournament games). If such a game is found, it is eliminated from the calculation of the affected team's RPI (though not, I don't think, from the RPI of the losing team, although that doesn't really matter because a team that bad will not be in consideration for tournament selection).
     
  2. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    You sure about that?:D
     
  3. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Just a quick note: Great job bmoline in finding the new RPI published by the NCAA. I wouldn't even have looked for it for a couple of weeks, since it's an unannounced publication. Now, I get to go through the data discrepancy process.

    I'm wondering why the NCAA chose to publish this now. Anybody know or have an idea?
     
  4. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    I'm guessing there's a new guy nobody told not to publish it.:p
     
  5. bmoline

    bmoline Member

    Aug 24, 2008
    Champaign
    Club:
    Chicago Red Stars
    Nat'l Team:
    United States
    I was really surprised to find it. I was writing a blog post about polls/RPI, so I went to check the official RPI report to see how far Illinois had dropped. In fact, I'd already linked to your PDF of the top 50+ teams when I found it. Totally a fluke that I happened to check.

    Great job again, cpthomas. It's pretty amazing that you're able to approximate this so closely.
     
  6. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
Might as well take a moment to point out some of the absurdities the RPI creates in terms of strength of schedule. Last season I made an example of Duke and Western Illinois and how a win over Western Illinois benefited your RPI more (quite a bit more) than a win over Duke. With cpthomas' breakdown of individual schools' win-loss percentage (Element 1 of the RPI) and their opponents' averaged win-loss percentage (Element 2 of the RPI), we can see a number of absurdities like this in the current season. Just for example:

    A win over Old Dominion helps your RPI more than a win over USC.

    A win over Western Kentucky helps your RPI more than a win over Duke, Colorado, or Texas A&M.

    A win over Siena helps your RPI more than a win over Texas.

    A win over Long Island helps your RPI more than a win over West Virginia or Missouri.

    ****
    If you care to see how the RPI accomplishes these miracles, you can follow along for a moment:

    Let's say Team A played one game, a win against Western Kentucky:

    Your RPI would break down this way -

    Element 1: Team A's win pct : 1.000 (only result: the one win against Western Ky)
    Element 2: Opponent's avg win pct: 0.9000 (because Western Ky is the only opponent, this is same as West Ky's win pct --or West Ky's Element 1--, which would exclude this loss to Team A)
    Element 3: Opponents' opponents' win pct: 0.4021 (which is West Ky's Element 2 because again West Ky is the only opponent so far, so only West Ky's opponents count)

    Team A RPI = 25% Element 1 (1.0000) + 50% Element 2 (0.9000) + 25% Element 3 (0.4021) = .8005

    If Team B played one game, beating Texas A&M, we could break down their RPI the same way.
    Element 1: 1.0000 (the one win against A&M)
    Element 2: 0.7813 (A&M's Element 1, their winning pct not including this new loss to Team B)
    Element 3: 0.6089 (A&M's Element 2, their opponents' avg win pct)

    Team B RPI = 25% Element 1 (1.0000) + 50% Element 2 (0.7813) + 25% Element 3 (0.6089) = .7929

    So Team A gets a higher RPI for beating Western Kentucky than Team B does for beating Texas A&M.

    Neat, huh? Well, the RPI is rife with these absurdities.

    By taking 50% of any team's Element 1 (its win pct) and adding 25% of their Element 2 (their opponents' avg win pct), you can compare what the RPI effect will be of playing them versus any other team. (Naturally, to compare, you assume you get the same result against both opponents, whether it's a win against both, a tie against both, or a loss.)

    (As an aside, I usually apply the RPI's strength-of-schedule formula in making these comparisons, which is 2/3 of Element 1 + 1/3 of Element 2. It's just a different way of expressing the results; the relative values between the teams will be exactly the same. Notice that in either case you're taking two parts of Element 1 to one part of Element 2.)
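The arithmetic above can be sketched in a few lines of Python. This is just the standard unadjusted-RPI weighting described in the posts (25% own win pct, 50% opponents' win pct, 25% opponents' opponents' win pct); the function names are mine, not anything official:

```python
def rpi(e1, e2, e3):
    """Unadjusted RPI from its three elements (25/50/25 weighting)."""
    return 0.25 * e1 + 0.50 * e2 + 0.25 * e3

def opponent_value(opp_e1, opp_e2):
    """How much playing this opponent feeds your RPI: their Element 1
    lands in your Element 2 (50% weight) and their Element 2 lands in
    your Element 3 (25% weight)."""
    return 0.50 * opp_e1 + 0.25 * opp_e2

# Team A: one game, a win over Western Kentucky (E1 .9000, E2 .4021)
team_a = rpi(1.0000, 0.9000, 0.4021)   # ~ .8005, matching the figure above
# Team B: one game, a win over Texas A&M (E1 .7813, E2 .6089)
team_b = rpi(1.0000, 0.7813, 0.6089)   # ~ .7929, matching the figure above

print(round(team_a, 4), round(team_b, 4))
# The "absurdity": Western Kentucky is the more valuable opponent.
print(opponent_value(0.9000, 0.4021) > opponent_value(0.7813, 0.6089))
```

Because `opponent_value` is just 3/4 of the 2/3-and-1/3 strength-of-schedule formula mentioned in the aside, the two versions always rank opponents identically.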
     
  7. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I'll answer my own question. According to NCAA staff, the Division 1 Women's Soccer Committee asked that the RPI be published three times during the season. The Championship Manual says it will be published twice, but apparently the Committee wanted it done three times. As far as I'm concerned, the more the better.
     
  8. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Someone asked me if they were correct in assuming that we should expect teams from conferences with end-of-season tournaments to do better in the RPI as the end of the season approaches, as a result of those tournaments. This is a question I've wondered about since, at first glance, it might seem likely that at least the top teams with conference tournaments would do better: as they advance, they generally are playing teams with better and better records, so they get to beef up their strengths of schedule. On the other hand, by playing teams with better and better records, they also are increasing their chances of losing or tying. In addition, for every conference game, all the other conference teams add a win/loss pair or a tie/tie pair to their strengths of schedule, which pulls those conference teams' strengths of schedule towards 0.5000. So, conference tournament games tend to pull all conference teams' strengths of schedule towards 0.5000. In other words, the answer to the question isn't obvious.
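The "pull towards 0.5000" can be illustrated with a toy calculation. This is a simplification (the real Element 2 averages each opponent's percentage separately rather than pooling their records), and the starting record is hypothetical, but the direction of the effect is the same: each conference game you don't play in adds one win and one loss (or two ties) to your conference opponents' combined record:

```python
# Hypothetical combined record of a team's conference opponents: 30-10 (.7500).
# Every conference tournament game the team doesn't play in adds exactly one
# win and one loss (or a tie/tie pair) to this pool, dragging it toward .5000.
wins, losses = 30.0, 10.0
for played in range(5):
    pct = wins / (wins + losses)
    print(f"after {played} extra conference games: {pct:.4f}")
    wins += 1      # each game adds exactly one win...
    losses += 1    # ...and one loss to the pool
```

Running this prints a steadily shrinking percentage (.7500, .7381, .7273, ...), never reaching .5000 but always moving toward it.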

    To get a sense for an answer to the question, I decided to take last year's data and compare RPIs as of (1) games played through the next-to-last weekend of the regular season and (2) games played through the last weekend of the regular season. Although there are a few tournament games played during the next-to-last week of the season (Big East), the vast majority of them are played during the last week.

    I looked at the ACC, Big 10, Big 12, Big East, Colonial, and SEC as examples of tournament conferences. I used the Ivy, Pac 10, and WCC as non-tournament conferences.

    To start, I looked at averages by conference. The average change in Adjusted RPI, from games through the next-to-last week to games through the last week, went in the following order, from the conference that did "best" to the conference that did "worst." You may note that all conferences' average RPIs went down, and that the difference from top to bottom is very small:

    Big East - .0020
    Colonial
    WCC
    Big 10
    Big 12
    Pac 10
    SEC
    Ivy
    ACC -.0048

    Looking at RPI Element 2, average of opponents' winning percentages against other teams, the changes were as follows:

    Colonial -.0014
    Big East
    Big 12
    WCC
    Big 10
    Pac 10
    SEC
    ACC
    Ivy -0.0056

    Looking at RPI Element 3, opponents' opponents' average winning percentage, the average changes were as follows:

    Big 10 .0008
    Colonial
    Big 10
    Pac 10
    ACC
    Ivy
    Big East
    SEC
    WCC -.0022

    I then did what I could to look at the distribution of teams in each of these areas, to see if there was a distribution pattern that might indicate a difference between tournament and no-tournament conferences. The 6 tournament conferences I looked at have 73 teams and the 3 no-tournament conferences have 26, so 74% of the teams are in tournament conferences and 26% are not. I broke the above areas into groups of 10 teams to see how the tournament/no-tournament teams were dispersed through the ranks of all 99 teams. Here are the results:

    Change in Adjusted RPIs of teams: range of +.0200 (best team) to -.0218 (worst team)

    1-10 9/1 (# of Tourney Teams / # of No-Tourney Teams)
    11-20 5/5
    21-30 4/6
    31-40 9/1
    41-50 8/2
    51-60 7/3
    61-70 9/1
    71-80 9/1
    81-90 6/4
    91-99 7/2

    If you look at these in terms of groups of 20 teams, the percent of no-tournament teams in each group of 20, as compared to their 26% representation in the total group, was 30%, 35%, 25%, 10%, 32%. In other words, it doesn't look like there's a pattern that favors either group of teams.

    Turning to RPI Element 2, here are the results: range of +.0275 to -.0250

    1-10 6/4
    11-20 6/4
    21-30 8/2
    31-40 8/2
    41-50 9/1
    51-60 10/0
    61-70 9/1
    71-80 8/1
    81-90 7/3
    91-99 1/8

    Looking at groups of 20, the dispersal of no-tournament teams is 40%, 20%, 5%, 10%, 58%.

    Turning to RPI Element 3, I got the following results: range of +.0060 to -.0073.

    1-10 6/4
    11-20 8/2
    21-30 8/2
    31-40 7/3
    41-50 8/2
    51-60 10/0
    61-70 9/1
    71-80 9/1
    81-90 4/6
    91-99 4/5

    Considering groups of 20, the percentages of no-tournament teams are 30%, 25%, 10%, 10%, 58%.

    Although the distributions of no-tournament teams in the last two sets of numbers look a little odd, I don't see any particular pattern in any of these numbers. So, at least based on this one test, there does not appear to be a particular advantage that the RPI gives to either tournament or no-tournament conferences or teams.
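The groups-of-20 arithmetic above is easy to reproduce. A quick sketch using the Adjusted-RPI table's no-tournament counts (the last group covers ranks 81-99, so it has 19 teams rather than 20):

```python
# No-tournament teams per group of 10 ranks, from the Adjusted RPI table above.
no_tourney_per_10 = [1, 5, 6, 1, 2, 3, 1, 1, 4, 2]
group_sizes = [20, 20, 20, 20, 19]   # 99 teams total, last group is short

for i, size in enumerate(group_sizes):
    count = no_tourney_per_10[2 * i] + no_tourney_per_10[2 * i + 1]
    print(f"ranks {20 * i + 1}-{20 * i + size}: {count / size:.0%}")
```

This reproduces the 30%, 35%, 25%, 10%, 32% dispersal quoted above, which can then be compared against the 26% baseline representation of no-tournament teams.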

    Any thoughts from others?
     
  9. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    Hey! You have us hooked. We're waiting. Not so patiently...

    :)

    (Better use the smiley. Cliveworshipper's taught me you can't even count on the old-school guys anymore to understand without a friggin' smiley...!)
     
  10. bmoline

    bmoline Member

    Aug 24, 2008
    Champaign
    Club:
    Chicago Red Stars
    Nat'l Team:
    United States
    cpthomas, wondering how close you were able to come last week to matching up your numbers with the official RPI release. Were there data inconsistencies, or was it the continuing battle to match the reward/penalty points?

    Mostly, though, I'm like kolabear and just waiting for this week's numbers! ;-)
     
  11. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    First things first. Here are two pdf files with the RPI report through the October 26 games. (There are a couple of "low end" SWAC games missing due to the schools' not yet having reported the scores anywhere, including on their own websites.) Report 1 covers teams ranked 1-58; and Report 2 covers teams ranked 59-119.

    I've added a new column this week, the Unadj Rank column, which shows teams' ranks prior to the bonus/penalty adjustments. This is so you can get an idea of the extent of the changes resulting from the adjustments. It's also so that later this week I can post some thoughts I've had about the bonus/penalty adjustment process.

    In regard to bmoline's question, I did have a couple of minor data discrepancies related to home-neutral-away game sites, and a couple of erroneous game scores. In addition, I had two games from the SWAC that the NCAA did not yet have official scores for. After correcting my errors and deleting the two SWAC games so that my database hopefully matched the NCAA's, my results still didn't match the NCAA's exactly, although they were better. Since then, I've done a little more bonus/penalty formula tweaking, and my results (through 10/19) are better still, although they still don't exactly match the NCAA's. For the top 50 teams, I'm off in three cases, the first involving two teams and the other two involving three teams. So, the results are pretty good, but I still have work to do.

    Have fun with this!
     
  12. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    Oh, I see the brewing controversy (especially now that the Bruins have jumped to #3 above Portland) -- the Committee has to do something to bump up Notre Dame (unbeaten, untied) to one of the four #1 seeds. That means the #4 team gets dropped to #5 and out of one of the #1 spots.

    Which means Portland at this point.

    Wonder if they've figured that out over on PilotNation yet... Ouch...
     
  13. UFGator98

    UFGator98 Member

    Aug 13, 2001
    Florida
    Do you think when seeding these teams, the committee will take into account players who aren't going to be there, because of U-20 duty? A lot of these top teams are losing players who are not redshirting. That has to be a factor, right?
     
  14. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    Nah, there's no way the committee can do that, can they?

    By "taking into account" players who won't be there, we're saying that we're going to seed them lower and expect them to be worse off, right? (a fair assumption since Stephon Marbury doesn't play for any of these teams.)

    But I can't see how the Committee could, in effect, tell a team, "Well you earned the right to host the 1st two rounds but we're going to seed you lower since your defensive-mid is going to be in Santiago with the U-20s."
     
  15. Morris20

    Morris20 Member

    Jul 4, 2000
    Upper 90 of nowhere
    Club:
    Washington Freedom
    Absolutely not. The committee can only consider the published selection criteria. Thankfully. Roster makeup, injuries, national team call-ups, whatever: they are NOT selection criteria and as such can't be part of the discussion (and they aren't; sadly, most of the committee won't have any idea who's playing for the U-20s or how that will impact the tournament).
     
  16. JuegoBonito

    JuegoBonito New Member

    Jan 15, 2008
    Oh, this RPI report is very cool, cpthomas. It has UW ahead of USC and Cal!!! I love it. So if I get this system, Portland could steadily drop in RPI ranking (though winning) because they are now playing conference teams and beating opponents that are not highly ranked and do not have strong win records? Is that the case? Meanwhile, Pac 10 teams probably get a bigger boost beating most of their conference teams? Is this a problem for Portland when it comes to seeding?
     
  17. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    It's only an advantage for the Pac 10 team that wins out, I'd guess.

    And one of the two remaining games is against USD, which has a pretty good RPI.
     
  18. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Great post for getting a good discussion going!

    You have part of how the RPI works correct: if two teams start with equal RPIs and keep winning, and one is in a conference with higher average RPIs, then over all the conference games the team in the stronger conference will feel a gravitational pull towards a higher RPI. However, you also have to look at the two teams' non-conference opponents and how they are doing, since their wins and losses keep getting added into the two teams' strength-of-schedule elements. If the non-conference opponents of the team from the strong conference are doing less well than the opponents of the team from the less strong conference, then those non-conference games' contributions to the two teams' RPIs may balance out the conference games. (Because of this, the selection of non-conference opponents in the scheduling process is very important, especially for teams from potentially weaker conferences.)

    The other thing to look at, at this point, is which conference teams the two teams have yet to play. In this respect, there is a real showdown coming next weekend among the four California teams, which will help the strengths of schedule of all of them. On the other hand, some are going to win, some are going to lose, and maybe some are going to tie; so, although their Elements 2 may improve, some of their Elements 1 will suffer. Portland next weekend plays Santa Clara, which, unfortunately, will not help its strength of schedule if Portland wins (which is not a foregone conclusion). On the other hand, Portland closes the following weekend against San Diego, which so far has an excellent record that, if Portland wins (again not a foregone conclusion), will significantly help its strength of schedule. Plus, the Pac 10 is a bigger conference than the WCC, and every conference game pulls all the uninvolved conference teams' strengths of schedule towards .5000 (because every conference game results in a win and a loss, or two ties, for those teams' opponents), so that works slightly against the Pac 10 teams.

    A better team to look at in this regard, in my opinion, is Notre Dame. They keep winning, but they now have descended to #6. This is because the Big East is relatively weak, as strong conferences go. The Big East tournament starts this week. Notre Dame will play the winner of Cincinnati v St Johns; assuming ND wins, it then will play the winner of Marquette v Rutgers; and if it wins again, it will play the finals. The first game, and possibly the second, won't help and may hurt ND's RPI. In addition, if ND wins the tournament, there will be 6 games it doesn't participate in that will pull its strength of schedule towards .5000. It's possible, I believe, that ND could descend even more in the RPI ranks. The question then will be the extent to which head-to-head results and results against common opponents are sufficient to propel ND to a higher seed than its RPI indicates.

    My own sense of all this is that once the regular season is over there's going to be a mishmash of teams at the top. If the non-California top teams of the moment win out (which, I say again, is not a foregone conclusion), then how the seeding of the top teams works out may depend largely on the outcomes of the LA games this weekend, since there is a possibility that the Women's Soccer Committee may be able to link all of Notre Dame, North Carolina, UCLA, USC, Stanford, and Portland through either head-to-head results or results against common opponents once those games are completed. The Committee then would have to balance those factors against the RPI to decide on seeding at the top level. The one other game that could play into this is North Carolina v Florida State, both this week and in the ACC tournament if they meet again there. This game could be a factor if Florida State beats NC. (I think USC, at present, is not in line to get close to a #1 seed. However, if they were to beat Stanford...? That would make things very interesting.) I think this is going to be a very tricky year for the Women's Soccer Committee.

    I apologize if this isn't coherent. It's late and my brain is feeling addled.
     
  19. khsoccergeek

    khsoccergeek New Member

    Jan 10, 2002
    West Virginia
    Plus Notre Dame and West Virginia, the two division champions, didn't meet in the regular season for the second consecutive year. Big hazard of the size of the conference. So what should have been a natural RPI boost for both teams just isn't there.
     
  20. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I'm going to take a crack at describing how the RPI's difficulty rating teams from the different regions in a single rating system is made worse by the bonus/penalty system. I'll also include a couple of other issues with the bonus/penalty system.

    First, some more background on the problem rating regions in a single system.

    I've run again a set of average RPIs by region, based only on inter-regional games, covering games through October 26. Those numbers are as follows:

    Central .4782
    Great Lakes .4826
    Mid Atlantic .4798
    Northeast .4775
    Southeast .4844
    West .5387

    This week, however, I also ran a set of average RPIs by region, based only on intra-regional games. Those numbers are as follows:

    Central .4992
    Great Lakes .5005
    Mid Atlantic .5041
    Northeast .5072
    Southeast .5015
    West .5006

    These last numbers are what one would expect. Each region, using only intra-regional games, is completely separate from each other region. When computing RPIs for completely separate pools of teams, one would expect the average RPI for each pool to be roughly .5000 (with slight variations, I believe, because the different teams play different numbers of games). Assuming each pool has a typical bell curve distribution of teams' RPIs, the RPI distributions also should be the same from one pool to the next. Therefore, if you then bunch all the regions' RPIs together, in any ranking group you choose, you will get the number of teams from each region that is proportional to the number of teams in the region. So, if you are looking at the top 50 teams (which is a good number for tournament at large selection purposes), you would expect to see 18% from Central, 19% from Great Lakes, 15% from Mid Atlantic, 13% from Northeast, 19% from Southeast, and 16% from West, with the percentages based on the percent of teams from that region out of all 318 teams.
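The claim that a closed pool averages to .5000 is easy to demonstrate. A minimal sketch, assuming a single round robin with randomly decided games (ties omitted for simplicity): every game produces exactly one win and one loss inside the pool, so the pool's average winning percentage is .5000 no matter how the results fall:

```python
import itertools
import random

random.seed(1)
teams = list(range(8))                      # a small, self-contained pool
record = {t: [0, 0] for t in teams}         # [wins, losses] per team

for a, b in itertools.combinations(teams, 2):   # single round robin
    winner, loser = (a, b) if random.random() < 0.5 else (b, a)
    record[winner][0] += 1
    record[loser][1] += 1

pcts = [w / (w + l) for w, l in record.values()]
print(sum(pcts) / len(pcts))                # always 0.5 for a round robin
```

Because each region behaves like such a pool for intra-regional purposes, merging the regions' RPIs tells you nothing about cross-regional strength, which is exactly the point of the paragraph above.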

    So far this season, teams have played 2684 games. Of those, 1726 have been intra-regional and 958 have been inter-regional. In other words, 64% have been intra-regional and 36% have been inter-regional. As a matter of interest, breaking these percentages down by region, the numbers of intra- and inter-regional games by region are:

    Central 65/35
    Great Lakes 70/30
    Mid Atlantic 50/50
    Northeast 57/43
    Southeast 63/37
    West 78/22

    (Geographic issues presumably are the reasons for the differences.) What these numbers appear to verify is that the regions (with the exception of the uniquely located Mid Atlantic region) do in fact predominantly play within regional playing pools.

    Regarding the intra-regional average RPIs, since the pools of teams are completely separate, the average RPIs really provide no basis whatsoever for making cross-regional comparisons of teams. So far as a neutral reader of the RPIs is concerned, the regional pools could be equal or they could be unequal. For example, one pool could be U12s, another U14s, another U15s, etc. Each pool would produce the same cross-section of RPIs.

    What this means is that the only basis for comparing the pools to each other is the inter-regional average RPIs. If one is really going to go down the RPI road, the implication is that one first should identify the playing pools' strengths through the inter-regional RPIs. One then should adjust the intra-regional RPIs of teams so that the intra-regional average RPIs of teams from a region are reflective of that region's inter-regional average RPIs. And only then should one combine all into a single system. This, of course, would represent a radical change in the RPI. But the theory, at least to me, seems correct.
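One way the adjustment described above could be sketched, using the inter-regional averages from the table earlier in this post. To be clear, this is my own illustrative scheme (a simple additive shift), not anything the NCAA does or that the post prescribes in detail; other normalizations would serve the same idea:

```python
# Inter-regional average RPIs from the table above (games through Oct. 26).
inter_regional_avg = {
    "Central": 0.4782, "Great Lakes": 0.4826, "Mid Atlantic": 0.4798,
    "Northeast": 0.4775, "Southeast": 0.4844, "West": 0.5387,
}
overall = sum(inter_regional_avg.values()) / len(inter_regional_avg)

def adjusted_rpi(raw_rpi, region):
    """Shift a team's (intra-regionally driven) RPI by its region's
    offset from the overall inter-regional mean before merging regions
    into one national list. Purely an illustrative scheme."""
    return raw_rpi + (inter_regional_avg[region] - overall)

# Two teams with identical raw RPIs from different regions no longer tie:
print(adjusted_rpi(0.6000, "West") - adjusted_rpi(0.6000, "Northeast"))
```

Under this sketch, the gap between two otherwise identical teams ends up equal to the gap between their regions' inter-regional averages (.0612 between West and Northeast here).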

    The RPI, however, does not do this. Instead, it treats the intra-regional average RPIs as being comparable across regions independently of the inter-regional RPIs. Put differently, although the inter-regional average RPIs suggest that the regions are not of equal strength, the roughly 2/3 of all game data represented by intra-regional games state that the regions are of equal strength.

    It's based on this analysis, as well as an experiment I ran with last year's data after the season was over, that I have concluded that the RPI discriminates against teams from strong regions.

    Now for how this relates to the bonus/penalty awards. I'll discuss the bonuses, since they are most pertinent to at large selection and seeding decisions. The same reasoning applies to the penalties. There are two sets of bonuses. One set applies to wins and ties in games against teams ranked 1-40 in the unadjusted RPI. The second, lower, set applies to wins and ties in games against teams 41-80. If I am correct that the RPI discriminates against teams from strong regions, then a strong region will have teams that should be in the top 1-40 and 41-80 that are not there; and conversely weak regions will have teams there that should not be there. Since teams play predominantly within their regions, this means that teams from strong regions have a lesser chance to achieve bonus points than they should have; and teams from weak regions have a greater chance than they should have. Thus not only does the unadjusted RPI discriminate, but the bonus/penalty system increases the discrimination.

    Just a couple of other notes about the bonus system and I'm done. One of the things I follow is which teams are hovering around unadjusted RPI positions 40 and 80. This is because a win/tie over a #40 team gives a maximum bonus, whereas a win/tie over a #41 team gives a lesser bonus; and the difference between #80 and #81 is the difference between a bonus and no bonus. I've calculated, based on last year's data and the RPI's accuracy in predicting results of the NCAA tournament, that the RPI's standard error is in the range of .0200 to .0300. In fact, last year, where the RPI difference between two opponents was less than .0600, the RPI was correct in predicting tournament game outcomes only 58.6% of the time. For the bonus awards, the teams around #41 and #80 are much, much closer together than that. So, from a statistical perspective, the bonus/penalty system seems very hard to justify.
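The rank cliffs described above can be made concrete with a small sketch. The tier boundaries (1-40, 41-80) are from the posts; the award amounts and the half-bonus for ties are purely hypothetical placeholders, since the actual values aren't reproduced in this thread:

```python
def bonus(opponent_unadj_rank, result):
    """Bonus for a win or tie, keyed to the opponent's unadjusted-RPI rank.
    Tier boundaries follow the description above; the amounts are made up."""
    if result not in ("win", "tie"):
        return 0.0
    if opponent_unadj_rank <= 40:
        tier = 0.0020          # hypothetical top-tier award
    elif opponent_unadj_rank <= 80:
        tier = 0.0010          # hypothetical lower-tier award
    else:
        return 0.0             # no bonus past #80
    return tier if result == "win" else tier / 2

# The cliffs: #40 vs #41 is a tier drop; #80 vs #81 is bonus vs nothing.
print(bonus(40, "win"), bonus(41, "win"))
print(bonus(80, "win"), bonus(81, "win"))
```

Whatever the real amounts are, the structure guarantees a discontinuity at ranks 40/41 and 80/81 that is far larger than the RPI's ability to distinguish the teams on either side of it.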

    Of course, the Women's Soccer Committee doesn't pay attention to the RPI's lack of precision, except to the extent that head-to-head results and results against common opponents would justify a deviation from the RPI. Perhaps in that context, the idea of rewards for wins/ties against top teams makes some sense.

    Finally, the Committee has a rule that if it can't make a "bubble" decision after looking at teams' RPIs, head-to-head results, and results against common opponents, then it will look at the teams' last eight games and at the teams' results against other teams already selected for participation in the tournament. In looking at other teams already selected, however, the Committee must disregard results against automatic qualifiers with RPI ranks below #75. Since the RPI awards bonuses for wins/ties against teams ranked through #80, it seems odd for the Committee to have used the 75th position as the cutoff for excluding consideration of results against automatic qualifiers.

    Just as a reminder, I have not suggested that the NCAA should completely revamp the RPI to eliminate the discrimination against teams from strong regions (although I might change my mind in the future and suggest it). All I have suggested is that when teams are in bubble situations and the decisions are not clear, it should give preference to teams from stronger regions, taking into consideration the RPI's tendency to discriminate against those teams.
     
  21. JuegoBonito

    JuegoBonito New Member

    Jan 15, 2008
    No, this makes sense. I guess I hadn't followed the RPI discussion closely enough to realize that your opponents' records KEEP contributing to your RPI throughout the season. I thought it was only at the time you played them. It's kind of weird, since you might play a team at the height of their success and win, then they lose four players (to injury and/or YNT duty) and start losing, and you get penalized for it (though they were highly competitive when you first played them). I can see lots and lots of flaws in this RPI, and you have managed to point many of them out, in particular how inter-regional play against relatively weak teams can inflate RPI ranking. But... I do like, for example, that it distinguishes between WSU and UW and ranks UW higher because UW has had a tougher schedule. Given that, isn't it odd that the NSCAA rankings ignore the RPI? Also the Soccer America rankings? Both, for example, have WSU above UW. I use these two as an example because I am most aware of their performances. Why is that? Aren't the coaches voting in these rankings aware of strength-of-schedule issues? The RPI? Albyn Jones? In all seriousness, what powers these rankings, do you think? Familiarity with the teams being ranked? Biases against given teams? History? Indifference? Do these voters/coaches really consider these issues? Any insight?
     
  22. Craig P

    Craig P BigSoccer Supporter

    Mar 26, 1999
    Eastern MA
    Nat'l Team:
    United States
    Well, it's pretty difficult to accurately take a snapshot of how good a team was at the time you played them. Sure, form ebbs and flows during a season over a number of games, to where you might be able to pick out trends by looking at a series of results, but there are also game-to-game fluctuations that are impossible to capture (injuries, suspensions, somebody just having a bad game). It would be pretty ridiculous to try to capture that by only taking a snapshot of the record at the time the game is played for RPI purposes. I'm not aware of ANY ranking system that does anything like that.
     
  24. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    A couple of "stray" comments related to previous posts. But first, something I just learned is that Division 1 Men's Ice Hockey, apparently this past summer, decided to eliminate the bonus system. Maybe Division 1 Women's Soccer will follow?

    The following is from CraigP in response to a post about how winning a game against a poor team actually can decrease a team's RPI.

    I've read the Division 1 Men's Hockey rule on this at some point in the past, but can't find it. I think the rule is that a team gets to exclude one conference tournament game if it won the game but nevertheless suffered a decrease in its RPI as a result. The theory, presumably, is that it wasn't a game the team "chose" to play, so the team shouldn't be punished for it. However, if that logic holds, then what about a team that plays a conference tournament game it loses but whose RPI increases as a result? Shouldn't that game also be excluded? What if the beneficial win helps more than the hurtful loss hurts? And why not exclude all such games -- both beneficial wins and hurtful losses?

    It seems to me that the real question should be: Which system, based on calculations at the completion of the regular season including conference tournament games, is the best predictor of NCAA Tournament results? The sole purpose of the RPI is to help pick the best 34 teams for at-large positions and to help seed the teams. The only way to measure the RPI's accuracy at picking and ranking the best teams is to compare its rankings to Tournament results. I think this applies to the issue Craig P raised, as well as to some of the geographic issues I have raised myself. (It also applies to which statistical system the NCAA should use - the RPI or some other system.) So, my question to Ice Hockey would be: has the change they made turned the RPI into a better predictor of Hockey Tournament results?
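    One simple way to run the kind of back-test suggested above: score a ranking system by the fraction of tournament games won by the better-ranked team. The sketch below is only illustrative; the rankings and results are invented, and a real study would use actual end-of-regular-season rankings and several seasons of tournament games.

    ```python
    def predictive_accuracy(rank, games):
        """rank: {team: position, 1 = best}; games: [(winner, loser), ...].
        Returns the fraction of games won by the better-ranked team."""
        hits = total = 0
        for winner, loser in games:
            if rank[winner] == rank[loser]:
                continue  # no prediction if the teams are ranked equally
            total += 1
            if rank[winner] < rank[loser]:
                hits += 1
        return hits / total if total else 0.0

    # Invented example data:
    rpi_rank = {"UNC": 1, "Notre Dame": 2, "UCLA": 3, "Portland": 4}
    tourney = [("UNC", "Portland"), ("Notre Dame", "UCLA"),
               ("UNC", "Notre Dame")]
    print(predictive_accuracy(rpi_rank, tourney))  # 1.0 -- favorites swept
    ```

    Two candidate systems — say, the RPI with and without the hockey-style exclusion rule — could then be compared by computing this score for each over the same historical tournament results.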

    This also is from Craig P, in response to a concern about adding the future games of a team that "my" team has played into my team's Strength of Schedule calculations, given that the other team may have suffered injuries or had other problems since that game, so that today it really is a different team than the one we played.

    The Women's Soccer Committee, according to its rules, does not look at these things. As discussed previously, it looks at the RPI (and its constituent parts), head-to-head results, and results against common opponents. If those are inconclusive, it looks at results over the last eight games and results against teams already selected to participate in the Tournament, other than conference champions with RPI ranks over #75. That's it, nothing else.

    Regarding the particular question of changes in an opponent's performance over the course of the season, I completely agree with Craig P's comment.

    But, look at this from the NCAA's Principles and Procedures for Establishing the Men's Bracket for Division 1 basketball:

    "The RPI is intended to be used as one of many resources available to the committee in the selection, seeding and bracketing process. It never should be considered anything but an additional evaluation tool. Computer models cannot accurately evaluate qualitative factors such as games missed by key players or coaches, travel difficulties, a team's performance in the last twelve games, the emotional effects of specific games, etc.

    "....

    "Each committee member independently evaluates a vast pool of information available during the process to develop individual preferences. It is these qualitative, quantitative and subjective opinions -- developed after many hours of personal observations, discussion with coaches, directors of athletics and commissioners, and review and comparison of objective data -- that dictate how each individual ultimately will vote on all issues related to the selection, seeding and bracketing process."

    Also: "Among the resources available to the committee are complete box scores, game summaries and notes, various computer rankings, head-to-head results, chronological results, Division 1 results, non-conference results, home and away results, results in the last twelve games, rankings, polls, and the NABC regional advisory committee rankings."

    In other words, the big-money, much more scrutinized basketball selection process is very different from the money-losing, little-scrutinized selection processes in other sports, including women's soccer. There may be legitimate economic reasons for this. However, the difference is interesting.
     
  25. Craig P

    Craig P BigSoccer Supporter

    Mar 26, 1999
    Eastern MA
    Nat'l Team:
    United States
    As a counterpoint to what they do with basketball, consider the case of the Notre Dame men's ice hockey team last year. In the final game of their opening playoff series, they lost their leading scorer to a knee injury. The Irish subsequently lost both the semifinal and the third-place game, and were seeded into the NCAA tournament as the last at-large team by a strictly numeric selection process (popularly known as the Pairwise ranking). If significant injuries were taken into account, the Irish might very well have been left out of the tournament.

    The rest is history: ND went on a tear, beating UNH and Michigan State to become the lowest seed ever to advance to the Frozen Four, then defeated the top overall seed Michigan in the semifinals before finally succumbing to BC despite a game effort in the championship game.

    So I think that a great deal of caution is in order in evaluating subjective factors that might lead to a team being discounted for selection.
     

Share This Page