2012 RPI

Discussion in 'Women's College' started by cpthomas, Jan 9, 2012.

  1. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I ordinarily wouldn't start this thread until much later, but at its upcoming meeting later this month, the Division 1 Women's Soccer Committee is going to be reviewing the RPI. That being the case, here are some of the questions I'm guessing the Committee will consider:

    1. Game Location. Presently, game locations show up in the RPI only in the adjustment process for good wins/ties and poor ties/losses. In some other sports (including men's basketball), the "winning percentage" portion of the RPI formula weights games based on whether they are home, away, or at a neutral site. A home win counts as less than 1 win, a home loss counts as more than 1 loss, an away win counts as more than 1 win, and an away loss counts as less than 1 loss. A neutral site win or loss counts as 1 win or 1 loss. The Committee could consider making this kind of change.
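
    To make the idea in item 1 concrete, here is a minimal sketch of what a location-weighted winning percentage could look like. The 0.6/1.4 weights are purely illustrative (they are the figures commonly cited for men's basketball), and how ties would be weighted is an open question; everything in the sketch is an assumption, not a Committee proposal.

    ```python
    # Illustrative sketch of a location-weighted winning percentage.
    # The 0.6/1.4 weights and the handling of ties are assumptions, not NCAA values.

    def weighted_winning_pct(results):
        """results: list of (outcome, location) tuples; outcome in 'W'/'L'/'T', location in 'H'/'A'/'N'."""
        win_weight = {'H': 0.6, 'N': 1.0, 'A': 1.4}    # a home win counts as less than 1 win
        loss_weight = {'H': 1.4, 'N': 1.0, 'A': 0.6}   # a home loss counts as more than 1 loss
        wins = losses = ties = 0.0
        for outcome, loc in results:
            if outcome == 'W':
                wins += win_weight[loc]
            elif outcome == 'L':
                losses += loss_weight[loc]
            else:
                ties += 1.0                            # ties counted as half a win, half a loss
        return (wins + 0.5 * ties) / (wins + losses + ties)

    # Example: a home win, an away loss, and a neutral-site tie
    print(weighted_winning_pct([('W', 'H'), ('L', 'A'), ('T', 'N')]))   # 0.5
    ```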

    2. Bonus and Penalty Adjustments. Once the NCAA computes the "normal" RPI, for some sports it makes adjustments for good wins/ties and poor ties/losses. The adjustment structure and amounts can vary from sport to sport. For Division I women's soccer, the structure has been to award, in descending order of amount, bonuses for wins against teams ranked 1 to 40 in the "normal" RPI rankings, for wins against teams ranked 41 to 80, for ties against teams ranked 1 to 40, and for ties against teams ranked 41 to 80. Within each of those groupings, the amount of the bonus depends on whether the win or tie was away, at a neutral site, or at home. Conversely, the structure has imposed mirroring penalties for poor ties and losses, with the ranking groups being the teams ranked 135 to 205, and teams ranked 206 and poorer. (The penalties for ties and losses to teams ranked 206 and poorer also apply for ties and losses to non-Division 1 teams.) The Committee could consider changing the bonus and penalty structure and/or amounts.

    Also, in sports in which the NCAA has moved to weighting the "winning percentage" element of the RPI based on game location, the tendency has been to eliminate the bonus and penalty adjustments altogether. The Committee thus could consider doing this.

    3. Region and Conference Problem. As I've discussed many times, the RPI has a problem rating teams from different regions and also rating teams from different conferences in a single national system. The NCAA staff and the Committee are well aware of this. To try to address the conference problem, the NCAA has developed the Non-Conference RPI, which doesn't really solve the problem and is significantly less reliable than the RPI as a measure of teams' performance. Although the NCAA never has admitted, so far as I know, that the RPI also has a regional problem, I am well satisfied that the NCAA staff knows there is a significant problem and I suspect that the staff has tried to address it within the structure of the RPI. (For example, I believe that the reason RPI Element 2 is the average of a team's opponents' results against other teams is because the "against other teams" qualification slightly helps reduce the region and conference problem.)

    I'm guessing that the Committee will be talking about the region and conference problem at its upcoming meeting. Further, the bonus and penalty adjustments can help moderate the problem, and I'm guessing that the Committee specifically will be considering bonus and penalty variations and how they might help with the problem.

    I think it's going to be very interesting to see what comes out of the Committee's meeting -- if we ever are able to find out what the Committee decides. It could be consequential -- or not.

    One thing I know after running a great number of experiments with RPI modifications is that the RPI's basic structure is quite "rigid" and it is not possible to make reasonable modifications that will solve the region and conference problem. It might, however, be possible to make modifications, particularly to the bonus and penalty amounts, that will slightly moderate the problem.

    If anyone finds out exactly what the Committee has on its agenda or what the Committee decides, please post it on this thread (or pm me and I'll post it).
     
    Tsunami repped this.
  2. cmonyougulls

    cmonyougulls Member

    Nov 24, 2011
    Club:
    Corinthians Sao Paulo
    What do you think of the NCAA D2 model of basically only considering in-region competition and giving each region a set number of slots in the tournament? 8 regions. 6 teams from each region.

    I understand that the regions are not necessarily equal and so you are not getting the "Top 48 teams" per se, but at least each region is pretty much played out on the field.

    The biggest problem in all of this is the limited number of games with which a decision has to be made.
     
  3. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I've put together an example of the significance of the kind of decision the Women's Soccer Committee might be looking at later this month, when it reviews the RPI.

    The example is based on a comparison of (1) the Adjusted RPI as used by the NCAA for the 2011 season* to (2) an RPI that filters out the effects of game locations on teams' ratings and makes no adjustments. The latter is similar to what the NCAA does with basketball except that I use a different -- and I believe at least as accurate -- location filtration process. The example compares how regions and conferences actually performed over the last five years in relation to their teams' ratings under each system. I've explained how I compute the performance percentage elsewhere. For here, just remember that a performance percentage of 100% is the norm. A percentage above 100% means that the region's or conference's teams are outperforming their ratings; in other words the teams, on average, are underrated. A percentage below 100% means they are underperforming; in other words, the teams, on average, are overrated. Also: The performance percentages given below are that portion of the percentage that is not the result of game location imbalances.

    For regions, the numbers are as follows:

    Regions

    Middle:
    2011 NCAA ARPI: 103.6%

    Alternative: 100.0%​

    Northeast:
    2011 NCAA ARPI: 96.8%

    Alternative: 97.7%​

    Southeast:
    2011 NCAA ARPI: 101.6%

    Alternative: 103.7%​

    Southwest:
    2011 NCAA ARPI: 81.3%

    Alternative: 83.4%​

    West:
    2011 NCAA ARPI: 118.7%

    Alternative: 118.1%​

    None of these differences is massive. For conferences, however, in some cases it's a different story. For ease of entry, the first number will be the 2011 NCAA ARPI performance percentage and the second will be the Alternative percentage. A few of the conferences' changes are particularly interesting:

    Conferences

    ACC: 109.3% goes to 122.3%

    SEC: 115.6% goes to 115.4%

    Pac 12: 114.1% goes to 111.9%

    Big East: 110.3% goes to 108.4%

    Mt. West: 105.7% goes to 108.4%

    Big West: 107.3% goes to 107.6%

    Big 10: 113.6% goes to 113.8%

    Mid American: 101.0% goes to 100.2%

    West Coast: 97.3% goes to 97.7%

    Missouri Valley: 99.0% goes to 98.9%

    Big 12: 100.4% goes to 107.3%

    WAC: 104.0% goes to 102.6%

    Horizon: 104.5% goes to 100.1%

    Northeast: 109.3% goes to 104.4%

    America East: 100.0% goes to 101.4%

    Ivy: 95.9% goes to 97.8%

    CUSA: 96.6% goes to 98.4%

    Big Sky: 102.5% goes to 102.3%

    Big South: 103.1% goes to 97.6%

    Summit: 94.3% goes to 103.5%

    Atlantic 10: 101.6% goes to 105.6%

    Colonial: 91.8% goes to 91.7%

    Patriot: 91.8% goes to 100.8%

    Atlantic Sun: 96.0% goes to 93.7%

    Metro Atlantic: 92.1% goes to 94.9%

    Sun Belt: 89.1% goes to 85.6%

    Southland: 83.9% goes to 80.0%

    Southern: 86.2% goes to 86.2%

    Great West: 101.5% goes to 88.9%

    Ohio Valley: 94.9% goes to 90.0%

    SWAC: 51.2% goes to 57.9%

    * This is the 2011 ARPI using bonus and penalty adjustments that I think were erroneous, as I've set out elsewhere.
     
  4. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Hopefully, my preceding post got the attention of at least a couple of conferences -- ACC? Big 12?

    So, here's another example comparing the 2011 NCAA ARPI to a possible alternative. After many, many experiments, this is the alternative I found to be the best from an overall region/conference perspective. I base this on a search for the alternative that had the best combination of (1) least underrating for the most underrated region and conference and (2) least spread between the most underrated and most overrated regions and conferences. I also looked at how well the system performed overall in rating teams, how well it performed in the closest 10% and 20% of games, and how well it performed at rating the top 60 teams. All of this was for the five year period from 2007 through 2011.

    First, some detail on the bonus and penalty adjustments. The adjustments the NCAA used in 2011 were as follows:

    Win v Team Ranked 1-40

    Away 0.0024
    Neutral 0.0022
    Home 0.0020​

    Win v Team Ranked 41-80

    Away 0.0018
    Neutral 0.0016
    Home 0.0014​

    Tie v Team Ranked 1-40

    Away 0.0012
    Neutral 0.0010
    Home 0.0008​

    Tie v Team Ranked 41-80

    Away 0.0006
    Neutral 0.0004
    Home 0.0002​

    The penalties for poor ties and losses mirrored the bonuses, with the groupings being Teams Ranked 135-205 and Teams Ranked 206 and poorer.*
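
    To see how that structure works mechanically, here is a small sketch of the 2011 lookup in code, with the penalties mirroring the bonuses as just described. The data layout and function names are mine, not the NCAA's.

    ```python
    # The 2011 bonus/penalty amounts as a lookup table, keyed by (result, opponent rank group).
    BONUS_PENALTY_2011 = {
        ('W', '1-40'):    {'A': 0.0024, 'N': 0.0022, 'H': 0.0020},
        ('W', '41-80'):   {'A': 0.0018, 'N': 0.0016, 'H': 0.0014},
        ('T', '1-40'):    {'A': 0.0012, 'N': 0.0010, 'H': 0.0008},
        ('T', '41-80'):   {'A': 0.0006, 'N': 0.0004, 'H': 0.0002},
        # penalties mirror the bonuses; note the home/away ordering flips
        ('L', '206+'):    {'H': -0.0024, 'N': -0.0022, 'A': -0.0020},
        ('L', '135-205'): {'H': -0.0018, 'N': -0.0016, 'A': -0.0014},
        ('T', '206+'):    {'H': -0.0012, 'N': -0.0010, 'A': -0.0008},
        ('T', '135-205'): {'H': -0.0006, 'N': -0.0004, 'A': -0.0002},
    }

    def rank_group(opponent_rank, result):
        """Map an opponent's Normal RPI rank to a bonus or penalty group, if any."""
        if result in ('W', 'T') and opponent_rank <= 40:
            return '1-40'
        if result in ('W', 'T') and opponent_rank <= 80:
            return '41-80'
        if result in ('L', 'T') and opponent_rank >= 206:
            return '206+'
        if result in ('L', 'T') and 135 <= opponent_rank <= 205:
            return '135-205'
        return None

    def adjustment(result, opponent_rank, location):
        group = rank_group(opponent_rank, result)
        if group is None:
            return 0.0
        return BONUS_PENALTY_2011.get((result, group), {}).get(location, 0.0)

    print(adjustment('T', 25, 'A'))    # +0.0012: an away tie against a top-40 team
    print(adjustment('L', 250, 'H'))   # -0.0024: a home loss to a team ranked 206 or poorer
    ```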

    In considering possible variations to the amounts, something that occurred to me was that the 2011 amounts (as well as amounts in earlier formulas) considered wins against teams ranked 41-80 to be better than ties against teams ranked 1-40. Yet based on the value of home field advantage, I myself have concluded that if I tie you away and another team wins against you at home, my tie statistically is as good as the other team's win -- in other words, the likelihood is that I would have beaten you if I had played you at home. In addition, it seemed to me -- and has proved true to a point -- that one way to help moderate the region and conference problem would be to increase the value of good results against teams ranked 1-40.

    Based on these considerations, I tried a number of alternatives that increased the awards for good results against teams ranked 1-40; and that increased the awards for ties so that an away tie was as valued as a home win. This process led me to the following awards, which I believe are about the best the system can do with the region and conference problem. They still leave a significant problem, but at least they do better:

    Win v Team Ranked 1-40

    Away 0.0036
    Neutral 0.0034
    Home 0.0032​

    Win v Team Ranked 41-80

    Away 0.0018
    Neutral 0.0016
    Home 0.0014​

    Tie v Team Ranked 1-40

    Away 0.0032
    Neutral 0.0030
    Home 0.0028​

    Tie v Team Ranked 41-80

    Away 0.0014
    Neutral 0.0012
    Home 0.0010​

    The penalties for poor ties and losses mirror the bonuses, with the groupings being Teams Ranked 135-205 and Teams Ranked 206 and poorer.

    Again after separating out the effects of game locations, the performance percentages moved from the 2011 NCAA ARPI to my alternative as follows:

    Regions

    Middle: 103.6 to 106.1

    Northeast: 96.8 to 97.4

    Southeast: 101.6 to 99.5

    Southwest: 81.3 to 81.5

    West: 118.7 to 117.7

    As is apparent, the changes are not great. And, the great overrating of the Southwest region and great underrating of the West region are persistent. The alternative is an improvement, and is about as good as can be accomplished within the structure of the RPI, but it still leaves a serious problem.

    Conferences

    SEC: 115.6 to 114.4

    Big 10: 113.6 to 110.5

    Pac 12: 114.1 to 109.7

    Northeast: 109.3 to 109.7

    Horizon: 104.5 to 108.5

    Big West: 107.3 to 107.7

    Mtn West: 105.7 to 107.5

    WAC: 104.0 to 107.1

    Big East: 110.3 to 106.2

    Mid American: 101.0 to 105.0

    ACC 109.3 to 104.8

    Atlantic 10: 101.6 to 102.7

    Big South: 103.1 to 102.6

    Big Sky: 102.5 to 100.5

    America East: 100.0 to 100.5

    Missouri Valley: 99.0 to 100.2

    Great West: 101.5 to 97.9

    Big 12: 100.4 to 97.7

    Ivy: 95.9 to 97.4

    CUSA: 96.6 to 97.3

    Ohio Valley: 94.9 to 97.3

    West Coast: 97.3 to 96.8

    Atlantic Sun: 96.0 to 96.7

    Metro Atlantic: 92.1 to 94.3

    Colonial: 91.8 to 93.1

    Summit: 94.3 to 92.9

    Patriot: 91.8 to 91.0

    Southland: 83.9 to 89.3

    Sun Belt: 89.1 to 86.2

    Southern: 86.2 to 86.1

    SWAC: 51.2 to 55.6

    I've suggested the NCAA staff try out these amounts. I have no idea whether they will.

    *The 2009 amounts, which I believe are the ones approved by the Women's Soccer Committee and inadvertently modified by the NCAA staff in 2010 and 2011, were:

    Win v Team Ranked 1-40: A 0.0032, N 0.0030, H 0.0028

    Win v Team Ranked 41-80: A 0.0018, N 0.0016, H 0.0014

    Tie v Team Ranked 1-40: A 0.0016, N 0.0014, H 0.0012

    Tie v Team Ranked 41-80: A 0.0012, N 0.0010, H 0.0008
     
  5. luvthegame

    luvthegame Member

    Oct 17, 2005
    CP, question for you. I was having a discussion with someone the other day about the RPI, and the question came up about Division II or III teams on a Division I team's schedule. If a Division I team plays a lower division team, how does that affect the Division I team's RPI? Does the NCAA count that game in the RPI? I know the lower division team has no Division I RPI attached to it, but does the win or loss help or hurt the Division I team's winning percentage, which is a factor in the RPI? If the win does count toward the team's overall winning percentage, then when scheduling games maybe it would be better to schedule a Division II or III team instead of a Division I team with a poor RPI, say in the 200s. I have heard that some teams that played and won games like that still lost RPI points because the poor-RPI opponent brought their strength of schedule down. What are your thoughts?
     
  6. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    The game against a lower division opponent does not count in calculating a Division I team's winning percentage or strength of schedule, so it doesn't affect what I call the unadjusted RPI, which the NCAA calls the Normal RPI. If a Division I team ties or loses to a lower division team, however, then for purposes of the good win/tie and poor tie/loss adjustments, the NCAA treats the tie or loss the same as a tie or loss to a Division I team ranked 206 or poorer. What this means is that the Division I team's tie or loss results in its Normal RPI being adjusted downward.

    Using the NCAA adjustments actually used during the 2011 season (the adjustments I think were erroneously substituted for the correct ones), the Division I team would have received the following penalty adjustments to its Normal RPI:

    Loss to lower division team: H -0.0024; N -0.0022; A -0.0020

    Tie with lower division team: H -0.0012; N -0.0010; A -0.0008

    The Division I team gets no RPI benefit whatsoever for winning against a lower division team.
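
    To put the rule in one place, here is a small sketch of it in code, using the 2011 penalty amounts quoted above. The function and table names are mine; only the rule itself comes from the NCAA's procedure as described.

    ```python
    # Non-DI games are excluded from the Normal RPI entirely; only a tie or loss
    # triggers the "ranked 206 or poorer" penalty. Amounts are the 2011 ones above.

    NON_DI_PENALTY_2011 = {
        'L': {'H': -0.0024, 'N': -0.0022, 'A': -0.0020},
        'T': {'H': -0.0012, 'N': -0.0010, 'A': -0.0008},
    }

    def non_di_game_effect(result, location):
        """Return (counts_in_normal_rpi, adjustment) for a DI team's game vs. a non-DI opponent."""
        counts_in_normal_rpi = False   # never included in winning percentage or strength of schedule
        adjustment = NON_DI_PENALTY_2011.get(result, {}).get(location, 0.0)
        return counts_in_normal_rpi, adjustment

    print(non_di_game_effect('W', 'H'))   # (False, 0.0): a win brings no benefit at all
    print(non_di_game_effect('T', 'A'))   # (False, -0.0008): an away tie still draws a penalty
    ```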

    Further, it is very rare for a Division I team to win a game and have its RPI go down as a result of the game. (And conversely, it is very rare for a Division I team to lose a game and have its RPI go up as a result of the game.)

    Thus my conclusion has been that, strictly from an RPI perspective, a Division I team never should play a team from a lower division. There's a potential downside from the game and no potential upside.

    Of course, if RPI isn't important to a team, there may be plenty of other good reasons to play a lower division opponent.

    As an aside, over the last five years, no team in contention for an NCAA Tournament position has played a lower division opponent. It just doesn't happen for teams at that level in Division I women's soccer.
     
  7. luvthegame

    luvthegame Member

    Oct 17, 2005
    CP, thanks for the clarification. That makes total sense. I appreciate it. Thanks for all your good work on the RPI.
     
  8. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Has anyone heard a report of what went on at the DI Women's Soccer Committee's meeting this month?

    If anyone knows a Committee member, it would be really helpful if you would ask them if the Committee made any decisions and, if the Committee did, report them here. In particular, did they make any decisions about the RPI and, if so, what decisions did they make?
     
  9. cmonyougulls

    cmonyougulls Member

    Nov 24, 2011
    Club:
    Corinthians Sao Paulo
    Do not know about the D1 committee, but apparently D2 may go to an RPI-type system...does anyone know what D3 currently uses for NCAA selection criteria?
     
  10. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Re: 2012 RPI - Bonus and Penalty Amounts

    As mentioned on another thread, I've been comparing my Non-Conference RPI ratings and rankings to the NCAA's over the years 2007 through 2011, now that I know for certain how the NCAA computes the NCRPI. I've adjusted my program to match the NCAA's method of computation, and my ratings and rankings now match the NCAA's perfectly for both the basic RPI and the NCRPI. I've also established that the NCAA uses the same bonus and penalty amounts for the Adjusted NCRPI as it uses for the basic Adjusted RPI.

    In the course of that work, I've been able to establish the bonus and penalty amounts the NCAA used in 2007 and 2008. I previously had done this for 2009-2011. Looking at the five year evolution of the bonus and penalty amounts provides some interesting insights into that aspect of the RPI. Here are the bonus and penalty amounts over the five years, with some comments:

    2007

    Win v Team Ranked 1-40

    Away: +.0028
    Neutral: +.0026
    Home: +.0024

    Win v Team Ranked 41-80

    Away: +.0014
    Neutral: +.0012
    Home: +.0010

    Tie v Team Ranked 1-40

    Away: +.0012
    Neutral: +.0010
    Home: +.0008

    Tie v Team Ranked 41-80

    Away: +.0008
    Neutral: +.0006
    Home: +.0004

    Loss v Team Ranked 206 and poorer

    Home: -.0028
    Neutral: -.0026
    Away: -.0024

    Loss v Team Ranked 135-205

    Home: -.0014
    Neutral: -.0012
    Away: -.0010

    Tie v Team Ranked 206 and poorer

    Home: -.0012
    Neutral: -.0010
    Away: -.0008

    Tie v Team Ranked 135-205

    Home: -.0008
    Neutral: -.0006
    Away: -.0004

    [Note: The maximum bonus amount of 0.0028 would have boosted a team in the 36 to 55 unadjusted RPI ranking range by an average of 2.25 positions in the rankings. This is the critical area, because it typically is the bubble area for at large selections for the NCAA Tournament.]
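
    One way a "positions gained" figure like that can be computed is to add the bonus to each team's unadjusted RPI and count how many teams it leapfrogs. The sketch below only illustrates the mechanics, using made-up ratings; the exact method behind the figures in this thread may differ.

    ```python
    import random

    def avg_positions_gained(ratings, bonus, rank_range=(36, 55)):
        """ratings: URPI ratings sorted best to worst; returns the average number of
        teams a bonus of this size would let a team in rank_range jump over."""
        gains = []
        for rank in range(rank_range[0], rank_range[1] + 1):
            idx = rank - 1
            boosted = ratings[idx] + bonus
            gains.append(sum(1 for higher in ratings[:idx] if boosted > higher))
        return sum(gains) / len(gains)

    # Made-up, tightly bunched ratings just to show the mechanics
    random.seed(1)
    fake_ratings = sorted((random.uniform(0.45, 0.65) for _ in range(300)), reverse=True)
    print(avg_positions_gained(fake_ratings, 0.0028))
    ```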

    2008

    Same bonus and penalty amounts as for 2007.

    [Note: In 2008, as distinguished from 2007, the maximum bonus amount of 0.0028, on average, would have boosted a team in the 36 to 55 URPI ranking range by an average of 4.1 positions in the rankings. This illustrates the variability of the dispersal of the ratings from one year to the next.]

    2009

    In 2009, the NCAA increased the amounts of the bonuses and penalties, but kept the same basic format. The increase was by 0.0004 at every level. The 2009 amounts thus were:

    Win v Team Ranked 1-40

    Away: +.0032
    Neutral: +.0030
    Home: +.0028

    Win v Team Ranked 41-80

    Away: +.0018
    Neutral: +.0016
    Home: +.0014

    Tie v Team Ranked 1-40

    Away: +.0016
    Neutral: +.0014
    Home: +.0012

    Tie v Team Ranked 41-80

    Away: +.0012
    Neutral: +.0010
    Home: +.0008

    Loss v Team Ranked 206 and poorer

    Home: -.0032
    Neutral: -.0030
    Away: -.0028

    Loss v Team Ranked 135-205

    Home: -.0018
    Neutral: -.0016
    Away: -.0014

    Tie v Team Ranked 206 and poorer

    Home: -.0016
    Neutral: -.0014
    Away: -.0012

    Tie v Team Ranked 135-205

    Home: -.0012
    Neutral: -.0010
    Away: -.0008

    [Note: If the NCAA had used the maximum bonus amount of 0.0032 in 2008, it would have boosted a team in the 36 to 55 URPI range, on average, by 4.15 positions (as compared to 4.1 positions using the actual 2008 amounts). In 2007, the boost would have been 2.45 positions (as compared to 2.25 positions using the actual 2007 amounts). Over the two year period, the average change in the maximum bonus amount would have been 3.3 positions under the 2009 amounts as compared to 3.175 positions under the 2007 and 2008 amounts.]

    I have been advised by the NCAA staff that the Women's Soccer Committee approves the numbers of places to be gained or lost when a bonus or penalty is applied. With that instruction from the Committee, the staff then installs bonus and penalty amounts intended to achieve those numbers of places gained or lost. They do this using the unadjusted RPI from the previous season as the reference point. And, in each succeeding year, they recalibrate the bonus and penalty amounts based on the URPI for the previous year. Because of this process, the amounts may vary from one year to the next, but in most cases the changes are going to be no more than 0.0001. I do not know exactly how the recalibration process works.

    With regard to the change from the 2007 and 2008 amounts to the 2009 amounts, it either would have had to be approved by the Committee or would have had to fall within the recalibration procedure described in the preceding paragraph. I'm not aware of the Committee having approved a change, although I can't say for certain that it didn't. On the other hand, it is conceivable to me that the Committee set up the 2007 and 2008 system some time earlier, that the NCAA staff did not recalibrate the amounts for a number of years (either inadvertently or because the recalibration process was not established until recently), and that the staff then recalibrated them using 2008 as a baseline. I'm not sure how the recalibration process works for the entire array of bonuses and penalties, but if it's similar to what I described above in relation to the maximum bonus amount of 0.0028, it appears that the Committee's instruction is that the maximum bonus, for an away win against a top 40 team, should be worth a jump of slightly more than 4 ranking positions.

    2010

    The fun began in 2010. For the early part of the season, the NCAA used the 2009 bonus and penalty amounts. In mid-season, however, the NCAA started using different amounts. I'm not aware of any Committee decision approving this and I think it was an inadvertent error, although the NCAA staff says there was no error. (I've written about this extensively, previously.) Whatever the reason, the amounts the NCAA used in the ratings provided to the Committee for its Tournament decisions were:


    Win v Team Ranked 1-40

    Away: +.0026
    Neutral: +.0024
    Home: +.0022

    Win v Team Ranked 41-80

    Away: +.0020
    Neutral: +.0018
    Home: +.0015

    Tie v Team Ranked 1-40

    Away: +.0013
    Neutral: +.0011
    Home: +.0009

    Tie v Team Ranked 41-80

    Away: +.0007
    Neutral: +.0004
    Home: +.0002

    Loss v Team Ranked 206 and poorer

    Home: -.0026
    Neutral: -.0024
    Away: -.0022

    Loss v Team Ranked 135-205

    Home: -.0020
    Neutral: -.0018
    Away: -.0015

    Tie v Team Ranked 206 and poorer

    Home: -.0013
    Neutral: -.0011
    Away: -.0009

    Tie v Team Ranked 135-205

    Home: -.0007
    Neutral: -.0004
    Away: +.0002

    [Note the amount for an away tie against a team ranked 135-205: the NCAA actually awarded a bonus (+.0002) for this rather than imposing a penalty. Whatever else happened, this clearly was an unintended mistake.]

    2011

    Win v Team Ranked 1-40

    Away: +.0024
    Neutral: +.0022
    Home: +.0020

    Win v Team Ranked 41-80

    Away: +.0018
    Neutral: +.0016
    Home: +.0014

    Tie v Team Ranked 1-40

    Away: +.0012
    Neutral: +.0010
    Home: +.0008

    Tie v Team Ranked 41-80

    Away: +.0006
    Neutral: +.0004
    Home: +.0002

    Loss v Team Ranked 206 and poorer

    Home: -.0024
    Neutral: -.0022
    Away: -.0020

    Loss v Team Ranked 135-205

    Home: -.0018
    Neutral: -.0016
    Away: -.0014

    Tie v Team Ranked 206 and poorer

    Home: -.0012
    Neutral: -.0010
    Away: -.0008

    Tie v Team Ranked 135-205

    Home: -.0006
    Neutral: -.0004
    Away: -.0002

    [Note: The 2011 amounts look like a correction of the 2010 penalty mistake combined with a recalibration of the 2010 amounts.]

    Obviously, there was a major shift from the 2007-2009 amounts and their arrangement to the 2010-2011 amounts and their arrangement. Despite several opportunities to do so, the NCAA never has advised me that the Committee approved any changes of this nature. That's why I think the current amounts were introduced by mistake. Nevertheless, the history provides some good insights into how the process of setting the amounts is supposed to work.
     
  11. midwestfan

    midwestfan Member

    Dec 31, 2011
    Club:
    Tottenham Hotspur FC
    Re: 2012 RPI - Bonus and Penalty Amounts

    CP, I find this fascinating but to be honest somewhat confusing.

    If you were to simplify this into assessing which conferences should get how many teams into the tournament, how might that shake out?

    I've always found it interesting that teams that have strong seasons in mid-major conferences get little respect compared to some of the more traditionally strong conferences. Being from the Midwest, I would like to see more than the typical one team from the MAC, Horizon, Summit or MVC get in. Usually you can find at least one team that should be considered (CMU this past season). The same goes for other non-traditionally strong conferences.

    So my question is: how does the NCAA determine who goes and who stays? It doesn't seem like they just look at the RPI and pick the next strongest teams, so why have it?
     
  12. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Re: 2012 RPI - Bonus and Penalty Amounts

    I'll give you a relatively short answer here. If you want a discussion in depth, go to the following website and, using the menu on the left, go through, in order, the webpages whose title begins with "NCAA Tournament: ....": https://sites.google.com/site/rpifordivisioniwomenssoccer/

    The Women's Soccer Committee uses the RPI to determine which teams are bubble teams in the selection process -- in other words, which are the teams that might get in and might not. The way to think of the bubble is as the last team that would get in if the Committee used only the Adjusted RPI, plus the 7 teams ranked just above it and the 7 teams ranked just below it. (It's slightly more complicated than that, but that's good enough for a short response.) The idea of this is that the RPI is not accurate enough to really be able to distinguish those 15 teams from each other -- nor would any other statistical rating system be accurate enough to do that.

    Once you've defined the 15 bubble teams, then you look at two other factors. The first factor is any head-to-head games among those teams. The other factor is the results of games any of those teams had against common opponents.

    Once you've looked at those three things - RPI, head-to-head results, and results against common opponents -- you see if you are ready to make a selection of which eight teams will get in and which will not.

    If you aren't ready to make a decision yet, then you look at two other factors. The first factor is results over the last eight games, taking into consideration both record over those games and strength of opponents. The second factor is results over the entire season against teams already selected for the bracket including the conference champion automatic qualifiers so long as they are ranked #75 or better by the ARPI.

    When you've considered all that, you make a decision.
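
    For those who like to see the mechanics, here is a rough sketch of just the bubble-identification step, under a simplified reading of the "7 above, 7 below" rule (it ignores the "slightly more complicated" refinements mentioned above). The team names, number of at-large slots, and automatic-qualifier handling in the example are hypothetical.

    ```python
    def bubble_teams(arpi_ranked_teams, automatic_qualifiers, at_large_slots):
        """arpi_ranked_teams: team names ordered best to worst by Adjusted RPI."""
        # consider only teams still needing an at-large spot
        candidates = [t for t in arpi_ranked_teams if t not in automatic_qualifiers]
        cutoff = at_large_slots - 1           # index of the last team "in" on ARPI alone
        lo = max(0, cutoff - 7)
        return candidates[lo:cutoff + 8]      # the cutoff team plus 7 above and 7 below = 15 teams

    # Hypothetical example
    ranked = ["Team%d" % i for i in range(1, 101)]
    aqs = {"Team3", "Team40"}
    print(bubble_teams(ranked, aqs, at_large_slots=34))
    ```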

    When I've gone through that process myself over the last three years, my at large selections have matched the Committee's.

    In looking at the RPI, the Committee can consider a whole bunch of subsets of information, although I don't think they really make much difference. The subsets are things like the Non-Conference RPI (which actually may make some difference), the conference average RPI, record in away games, winning percentage and winning percentage rank, opponents' average winning percentage and its rank, and opponents' opponents' winning percentage and its rank, etc., etc.

    The key to this is that once the Committee has used the RPI to identify the bubble teams, the importance of the RPI recedes considerably and the other factors come into play.

    While one can argue whether the RPI is the right statistical system, I think it's hard to argue that there's something wrong with the other factors.

    You'll note that there's nothing in the process related to how many teams a conference ought to get into the Tournament, other than the one automatic qualifier per conference. The at large selection process is intended to have the at large selections be the teams that have performed the best over the course of the season following identification of the automatic qualifiers, regardless of how many do or do not come from any one conference. I believe this is the NCAA's approach in all of its championships.

    I hope that helps.
     
  13. midwestfan

    midwestfan Member

    Dec 31, 2011
    Club:
    Tottenham Hotspur FC
    Re: 2012 RPI - Bonus and Penalty Amounts

    Thanks. Not the answer I'm looking for but it does better explain it and thanks for the link.

    Having skimmed through the criteria for their decisions, I couldn't help wondering how preseason games and schedules come into play.

    I have spoken to a few mid-major college coaches, and most are trying to schedule games with teams above their level to better prepare themselves for their seasons. But do some of the top teams try to avoid playing stronger mid-major teams, knowing that it could compromise a potential NCAA at-large berth? Typically the last eight games of the season are going to be in-conference games, so I'm not sure how going 4-4 in the ACC or SEC compares to going 7-1 in the MVC or Horizon (obviously if you go 8-0 you've won your conf. tourney and are an automatic).
     
  14. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Re: 2012 RPI - Bonus and Penalty Amounts

    Again, for a really full explanation, use the link I provided and go to the last of the NCAA Tournament pages; I think it's the one called Scheduling Towards the Tournament. That's actually intended as a resource for coaches in scheduling. But, to give a short answer:

    When scheduling a strong mid-major, a top BCS team has two considerations:

    1. The strong mid-major probably will have a very good record, which will help the BCS team's strength of schedule. (If the top mid-majors start scheduling a lot of very strong opponents, however, they may end up with poorer records than in the past, which might make their records, and thus their contributions to strength of schedule, not so good.)

    2. If the BCS team wins against the mid-major, overall it will be a big plus for the BCS team's RPI because it will help its winning percentage and its strength of schedule. But, if the BCS team loses, the overall impact on its RPI will be negative. In fact, if it is going to lose, the BCS team would be better off playing a weaker mid-major team and winning than playing a stronger mid-major and losing.

    So, the BCS team must calculate the likelihood of its winning. If it feels confident it can win, then there is a big RPI incentive for it to schedule the strong mid-major. I'm not sure how aware of this the BCS teams were several years ago, but I'm pretty sure most of them are well aware of it now. In fact, if you read the "Scheduling Towards the Tournament" page and look at the link to the nc-soccer page provided there, and if you want to spend a lot of time figuring the pages out, you can begin to get a picture of who you want to schedule for RPI purposes in your out-of-conference games.

    Also, however, for at large selection purposes, you can't just schedule to the RPI. You also have to schedule enough teams that will be in the top 20 or so that you have a chance of gaining a win or tie against at least one of them and hopefully even more. This is because of the secondary criterion of "record against teams already selected for the Tournament." Under that criterion, what is important is demonstrating some very good wins or ties.

    Thus scheduling is something of an art, balancing RPI considerations against very good win/tie considerations.

    It sounds, by the way, like the mid-major coaches you've talked to are doing the right thing. They need to schedule some very strong non-conference opponents in order to get the possibility of one or two really good wins/ties. They have to hope their strong non-conference record won't hurt them too much in terms of wins and losses. And, they have to perform really well once the conference season starts. That gives them the best shot into the Tournament. A big problem some mid-majors have had in recent years is getting into the bubble based on their RPI, but having no good wins/ties to get them from the bubble into the Tournament. Of course, if your mid-majors lose all their games against very strong non-conference opponents, that will hurt. It's simply a risk they have to take.
     
  15. midwestfan

    midwestfan Member

    Dec 31, 2011
    Club:
    Tottenham Hotspur FC
    Re: 2012 RPI - Bonus and Penalty Amounts

    After sending off the reply I did read the lower pages. Sorry you had to explain again, but I appreciate it as it does help clarify what I read.
    I was always under the impression that there was a little favoritism going on but it all makes better sense now. I guess the bottom line is the mid majors have to start strengthening their squads and schedules and hope to be able to sustain it for some time so that their conferences start moving into the higher RPI ranks.
     
  16. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Re: 2012 RPI - Bonus and Penalty Amounts

    Correct. Also, it needs to be a total conference effort. It's not enough for the top half of the conference to make the effort; all teams need to. From what I've heard, this is something the West Coast Conference figured out, and the Conference makes a major effort to have all teams working on "upgrading." It takes a constant effort. But, in my opinion, it is possible.
     
  17. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Based on some input I received on the RPI for Division I Women's Soccer website, I re-visited the "conference tournament or no conference tournament" question and did a much more sophisticated study than I had done before, using data from the 2011 season. For those interested, I have reported the results of that study on the RPI website, here:

    https://sites.google.com/site/rpifordivisioniwomenssoccer/effect-of-conference-tournaments

    I believe those of you who are interested in this question may find what I have reported interesting and, hopefully, useful if you are involved in a conference's "tournament or no tournament" decision-making process.
     
  18. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    There's good news about the RPI information it appears the NCAA will be making available during the course of this Fall's season. It's not as good as it could be but still is quite good. In saying this, I'm assuming the NCAA will make the same information available for soccer as it is making available for baseball's Spring 2012 season.

    At the NCAA.com website, it appears that the NCAA will continue to provide weekly RPI ranking reports just as it did last year. These reports do not show RPI ratings but rather only show teams' RPI rankings based on their ratings. There is nothing new about this. The link to the Division I Women's Soccer page is: http://www.ncaa.com/sports/soccer-women/d1. If you click on Rankings in the menu near the top of the page, you will go to the most recently released RPI ranking report.

    At the NCAA.org website (not the NCAA.com website), however, the NCAA now has a new RPI page. The page has some general information about the RPI and also a set of Frequently Asked Questions, with answers. (If you use the Print command at the bottom of the page, you can print out both the general information and the answers to all the FAQs.) Also, however, and most important, at the top of the RPI page there are links to RPI reports, organized by season and, within each season, by sport. Based on what the NCAA has provided for baseball this Spring, it appears that for each sport that uses the RPI, the NCAA weekly will be providing two reports. One is called the Nitty Gritty report and the other is called the Team Sheets report. The key report is the Team Sheets report, which has a sheet for each team with details about its record and which includes the team's Adjusted RPI rating. The link to this NCAA RPI page is: http://ncaa.org/wps/wcm/connect/public/NCAA/Championships/NCAA Rating Percentage Index/.

    The NCAA RPI page will have only the most recently issued Nitty Gritty and Team Sheets reports. At the NCAA's RPI Archive page, however, it will be possible also to access the Nitty Gritty and Team Sheets reports issued earlier in the season. The link to the NCAA's RPI Archive page is: https://rpiarchive.ncaa.org/default.aspx.

    Unfortunately, based again on baseball, it appears the NCAA, as in the past, will not provide the Nitty Gritty and Team Sheets reports that include data from the last week of the regular season until well after completion of the NCAA Tournament. These are the "Selection" reports that the Women's Soccer Committee uses in making its at large selection and seeding decisions. Thus it appears those interested in getting end-of-season ratings in advance of the Committee's decisions still will need to go to the nc-soccer website or the RPI for Division I Women's Soccer website for the ratings.

    There are three other reports the NCAA produces. It appears that these also will not be available until after completion of the Tournament. These are the Team Rankings reports (one based on all games and one on non-conference games), the League Ranking reports (all games and non-conference games), and the Team by Team Conference reports (all games and non-conference games). Of these, the Team Rankings report is the most important, because it shows the three Elements of the RPI, the unadjusted RPI, the Adjusted RPI, the unadjusted Non-Conference RPI, the Adjusted NCRPI, the number of bonus awards, and the number of penalty awards, as well as other detailed information about each team. It appears these reports, as well as the end-of-regular-season Nitty Gritty and Team Sheets reports, ultimately will be available via the RPI Archive page, but not at the time the Committee is making its decisions. When they are made available, it appears that there will be a "Selection" version of each report that shows the data the Committee had for its decision-making process, as well as a "Final" version that includes in its database the results of the NCAA Tournament games.

    It's disappointing that the NCAA appears to be delaying release of the "Selection" reports until after the NCAA Tournament, but it nevertheless is great that they appear now to be providing actual ratings during the course of the season.
     
  19. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Here is some of what appears in the NCAA's responses to Frequently Asked Questions, which appear on the NCAA.org's Rating Percentage Index page. In general, I think the NCAA has done well with the FAQs, but they come up short in some respects.

    "If a committee member is evaluating two or more teams, a wide difference in RPI rank can be a factor.

    "How 'wide' is 'wide'? A very generic rule of thumb is 20 or more ranking places, in addition to the actual mathematical difference between RPI rankings. But since every circumstance is different, these ranges can vary quite significantly from sport to sport and year to year."

    In response to the question why the RPI doesn't factor in past NCAA Tournament success, the NCAA states its general policy of considering only current players and not past successes achieved by other players. It goes on:

    ".... No selection process at any level of sport, collegiate or professional, uses past success as a factor in determining participation in a playoff or championship."

    This last statement is not true. The World Cup's allocation of numbers of teams to geographic areas is based on results over many years (decades?). Grand Slam tennis tournaments occasionally provide wild card positions to certain players based on past success, particularly to players who have been out due to injuries. The NCAA has been making this statement for many years; it isn't true, and it isn't necessary to its argument as to why past NCAA Tournament success should not be a factor. I agree with that argument, but not with the above statement.

    In response to the question "Can the current RPI formula cause a 'regional bias' in the team or conference rankings?"

    ".... If the number of potential teams is small, the possibility that these teams will 'beat up each other' could mean that there would be fewer outstanding records to catch the committee's eye, resulting in less at-large selections for teams from that part of the country.

    "Mathematically, it certainly can be argued that with fewer teams available it is possible all the teams in that region could 'bunch up' with similar records. Those in other parts of the country, however, could argue that if the great majority of these teams are strong clubs, that also reduces the opportunity to play very weak teams that hurt the strength of schedule element of the RPI."

    In my view, this response is pretty much bogus. It addresses the issue of a high level of parity within a region and its possible effect on ratings. That is not the basis, however, for the RPI's regional problem. The question isn't whether there is parity in one region and lack of parity in another. Rather, the question is whether one region is stronger than another, whether or not there is parity in either region. Due to the limited number of inter-regional games (a "lack of correspondence" among regions), the RPI tends to underrate regions that on average are strong and to overrate regions that on average are weak. This is an inherent problem not just of the RPI but of any statistical rating system. I am sure the NCAA rating experts know this, but as the above FAQ response shows, they consistently refuse to acknowledge it and instead obfuscate when they get to this issue.

    In response to the question "Why doesn't the RPI factor in the various national polls ...?"

    "While ... polls [are not] part of the RPI formula, the committee does receive this data from the NCAA staff as part of the entire package they are given for the selection meeting."

    Although the above statement is true for basketball, it is not true for women's soccer. Indeed for women's soccer, the NCAA explicitly states that polls are not considered.

    I don't mean to be too critical, as the NCAA has made some giant steps with the RPI information it makes public. Hopefully, they will refine and improve the FAQs over the next year or so.
     
  20. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    For those interested -- the number of non-Division I games is slightly up this year from last year, going from 53 to 62 games. The most interesting aspect of this is that North Carolina and Duke appear in the group of teams playing non-DI opponents with their games against Montreal. From an RPI perspective, there is no benefit to playing a non-DI opponent. If a DI team ties or loses to a non-DI opponent, however, it incurs the maximum penalties under the bonus/penalty adjustment process. I know nothing about Montreal's program, but presumably both NC and Duke will beat them.
     
  21. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006

    This isn't even true of the NCAA itself in its two biggest sports.

    It's untrue in the FBS rankings for football bowl games. Every team is rated before the season starts based on past success, graduated players, and new recruits, and it is much more difficult to get a top rating at the end of the season if your initial ranking is low. Past a certain point (Top 10?) it is nearly impossible to get a #1 ranking by the end of a season. Teams don't drop in the rankings unless they lose. The first polls came out just this week.

    Basketball also starts with rankings after only a few weeks, and teams have to lose to fall in the rankings.

    So the NCAA isn't being candid about its two biggest revenue sports.
     
  22. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    One of the things I've wondered is whether the ongoing conference realignment process will result in teams playing even more games within their regional playing pools than they have in the past. With all teams' schedules now available, I've been able to take a look at that question and can report that I don't think that's occurred for the 2012 season.

    For those new to the regional playing pools, I've identified five primarily (but not totally) geographic pools within which teams play. The pools are important because, under any mathematical rating system, the system will tend to rate each pool as of equal strength with each other pool. This is a problem if the regional pools are not of equal strength, and in fact the five pools are not of equal strength. Inter-pool games moderate this tendency, but they don't fully offset it, so the RPI and other rating systems will tend to overrate teams from weaker regions and to underrate teams from stronger regions. The more inter-regional games there are, the better.

    Here are the percents of inter-pool games over the last five years by region, as compared to the inter-pool games for the upcoming 2012 season:

    Middle Region (includes the Big 10): last five years ~24%; 2012 ~25%

    Northeast Region (includes the Big East): last five years ~15%; 2012 ~ 15%

    Southeast region (includes the ACC and the SEC): last five years ~23%; 2012 ~24%

    Southwest region (includes the Big Twelve): last five years ~24%; 2012 ~27%

    West region (includes the Pac 12 and the WCC): last five years ~20%; 2012 ~19%

    I'll note that these are not enough inter-regional games to overcome the rating systems' tendencies to rate different regions as equal even if, in fact, they are not equal. Of particular note is the Northeast region's low 15% inter-regional games. Almost a third of the Northeast region's teams play no inter-regional games. Here are the numbers of teams in each region and the numbers that play no inter-regional games:

    Middle: 58 teams, 1 with no inter-regional (2%)

    Northeast: 91 teams, 27 with no inter-regional (30%)

    Southeast: 62 teams, 2 with no inter-regional (3%)

    Southwest: 54 teams, 5 with no inter-regional (9%)

    West: 58 teams, 1 with no inter-regional (2%)
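
    For anyone who wants to run this kind of count themselves, here is a minimal sketch of the calculation, given each team's pool assignment and a list of games. The data structures are mine; the real assignments come from where teams actually play their games.

    ```python
    from collections import defaultdict

    def inter_regional_shares(region_of, games):
        """region_of: dict team -> region; games: list of (team_a, team_b) pairs."""
        total = defaultdict(int)   # team-games played by each region
        inter = defaultdict(int)   # team-games against another region
        for a, b in games:
            ra, rb = region_of[a], region_of[b]
            total[ra] += 1
            total[rb] += 1
            if ra != rb:
                inter[ra] += 1
                inter[rb] += 1
        return {r: inter[r] / total[r] for r in total}

    # Tiny hypothetical example
    regions = {'A1': 'West', 'A2': 'West', 'B1': 'Middle'}
    schedule = [('A1', 'A2'), ('A1', 'B1'), ('A2', 'B1')]
    print(inter_regional_shares(regions, schedule))   # West: 2 of 4 team-games, Middle: 2 of 2
    ```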
     
  23. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Here are some RPI-based numbers about home field advantage, developed as part of a much bigger project I'm working on.

    The numbers have to do with the boost a team gets from home field advantage and the drag a team gets from playing away, expressed in rating difference terms. In other words, a team performs as though its RPI rating is higher when it plays at home and as though its RPI rating is lower when it plays away. What is the extent of the difference between its actual RPI and its "effective" RPI at home and away?

    The extent of the difference varies from the RPI to the Non-Conference RPI. It also varies depending on the adjustments the NCAA makes in arriving at the Adjusted RPI and Adjusted Non-Conference RPI. Here are the numbers:

    Unadjusted RPI: Home team performs as though its rating were 0.009 higher and away team performs as though its rating were 0.009 lower.

    ARPI using NCAA's 2011 bonus and penalty adjustments: Home 0.008; Away -0.008

    ARPI using NCAA's 2009 bonus and penalty adjustments: Home 0.009; Away -0.009

    Unadjusted Non-Conference RPI: Home 0.011; Away -0.011

    ANCRPI: Home 0.012; Away -0.012

    To a great extent, these differences reflect the fact that each version of the RPI has a particular range of ratings. The NCRPI has a wider range of ratings than the RPI; and the adjusted versions generally have a wider range of ratings than the unadjusted versions.
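
    Here is a minimal sketch of how those numbers could be applied: treat the home team as if its rating were that much higher and the away team as if its rating were that much lower, then compare. The version labels and the example ratings are mine; only the rating differences come from the figures above.

    ```python
    # Home/away rating differences quoted above, by RPI version (labels are mine).
    HOME_EDGE = {'URPI': 0.009, 'ARPI_2011': 0.008, 'ARPI_2009': 0.009,
                 'UNCRPI': 0.011, 'ANCRPI': 0.012}

    def effective_ratings(home_rating, away_rating, version='URPI'):
        edge = HOME_EDGE[version]
        return home_rating + edge, away_rating - edge

    # Example: a home team rated 0.560 hosting an away team rated 0.575
    home_eff, away_eff = effective_ratings(0.560, 0.575)
    print(round(home_eff, 3), round(away_eff, 3))   # 0.569 vs. 0.566: the lower-rated home team plays like the better team
    ```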

    I've also done similar calculations using something I call the "Pure" RPI and NCRPI. For the NCAA's version of the RPI and NCRPI, in computing strength of schedule for Team A, the NCAA looks at Team A's opponents' results against teams other than Team A. In the Pure RPI and NCRPI, I use the same formula for determining strength of schedule except that in looking at Team A's opponents' results I include their results against Team A. This produces ratings with narrower ranges and thus the home-away differences are less.
     
  24. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    In a couple of weeks, the NCAA will start releasing its RPI reports. I don't know the exact schedule yet, but should know within about a week or a little more. In the meantime, you can follow how the RPI develops over on the nc-soccer website if you want, although at this stage of the season looking at RPI ratings is strictly educational since there haven't been enough games for the actual rating numbers to be meaningful. I'll start publishing weekly downloadable RPI reports on the RPI for Division I Women's Soccer website about the same time the NCAA starts publishing its reports.

    For quite a while now, I've been working on potential improvements to the RPI, but still using the NCAA's basic format. All my research is based on five years' data from 2007 to 2011 (slightly over 15,000 games). What follows is the results of my research and a set of changes that I believe would result in an improved RPI.

    As I've written before, the RPI's biggest problem is rating teams from different playing pools within a single national system. There are two sets of primary playing pools within which teams tend to play: (1) the conferences and (2) five mostly regional pools I've identified based on the groups within which teams played the majority of their games or, for the relatively few teams that didn't play a majority in any group, within which they played the predominant number of their games, over the last five years. As I've written, my research shows that under the NCAA's current formula, on average, teams from strong conferences and strong regions (as measured by their average Adjusted RPIs) tend to outperform their ratings (i.e., are underrated) and teams from weak conferences and weak regions tend to underperform (i.e., are overrated). Some of the underrating and overrating is due to home field imbalances, but most of it is not. The underrating/overrating problem is consistent with what theory predicts should be the case. My objective was to come up with revisions to the RPI that will (1) produce ratings that minimize or eliminate the conference and regional playing pool problems and (2) produce ratings at least as otherwise accurate as the NCAA's current ratings.

    I saw three potential areas for revisions to the current formula:

    1. Element 2 of Team A's RPI is based on Team A's opponents' winning percentages against teams other than Team A. One potential revision is to eliminate the "against teams other than Team A" limitation. I call the RPI without this limitation the "Pure" RPI. (A short sketch of this difference appears just after this list.)

    2. The second potential revision is to change the bonus/penalty structure for good wins/ties and poor losses/ties. I tested a very large number of potential revisions.

    3. The third potential revision is to establish adjustments based on the average RPIs of the conferences and regions. I tried a variety of adjustments for both conferences and regions.
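
    Here is the small sketch of the Element 2 difference in item 1 above: the current formula drops each opponent's games against Team A when computing that opponent's winning percentage, while the "Pure" version keeps them in. The data structures are illustrative only, not the NCAA's.

    ```python
    def winning_pct(results):
        """results: list of 'W', 'L', 'T' from one team's perspective."""
        return (results.count('W') + 0.5 * results.count('T')) / len(results) if results else 0.0

    def element2(team, schedule, results_of, pure=False):
        """schedule: dict team -> list of opponents.
        results_of: dict team -> dict of opponent -> list of results from that team's perspective."""
        opp_pcts = []
        for opp in schedule[team]:
            opp_results = []
            for other, res_list in results_of[opp].items():
                if not pure and other == team:
                    continue                      # current formula: drop the opponent's games vs. Team A
                opp_results.extend(res_list)
            opp_pcts.append(winning_pct(opp_results))
        return sum(opp_pcts) / len(opp_pcts)

    # Tiny illustration: Team A played B; B beat C and lost to A.
    schedule = {'A': ['B']}
    results_of = {'B': {'C': ['W'], 'A': ['L']}}
    print(element2('A', schedule, results_of, pure=False))   # 1.0: only B's win over C counts
    print(element2('A', schedule, results_of, pure=True))    # 0.5: B's loss to A is included
    ```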

    Overall, I tried variations based on different combinations of all three of these potential revisions. A fourth potential revision would be to attempt to make adjustments based on home field imbalances. For reasons I'll state below, I concluded these would not be good adjustments.

    I tested the potential revisions using a method that took out of play the effects of home field advantage, since strong conferences and regions tend to have favorable home field imbalances. I also set up the region adjustments in a way that would be comparable to the way the NCAA currently awards the bonus/penalty adjustments. And, since the NCAA uses the same bonus/penalty amounts for both the ARPI and the ANCRPI, I looked for the best set of potential revisions for the RPI and NCRPI as a pair, rather than selecting one set of revisions for the RPI and another set for the NCRPI.

    My conclusion is that there is a set of revisions that would pretty much solve the conference and regional playing pool problems while keeping the RPI and NCRPI at least as accurate as they otherwise are with the current NCAA formula. The set of revisions is:

    1. Convert to the Pure RPI and Pure NCRPI;

    2. Revert from the NCAA's 2011 bonus and penalty amounts to the NCAA's 2009 bonus and penalty amounts but otherwise don't change the bonus and penalty structure. Thus the amounts would be 32-30-28, 18-16-14, 16-14-12, 12-10-8 and would be based on teams' Pure URPI rankings.

    3. Establish a region adjustment as follows:

    If a regional pool's average Pure URPI is >= 0.515, then the region's teams receive an adjustment of +0.008

    If a regional pool's average Pure URPI is < 0.515 but >= 0.505, then the region teams' adjustment is +0.004

    If a regional pool's average Pure URPI is < 0.505 but >= 0.495, then the region teams' adjustment is 0

    If a regional pool's average Pure URPI is < 0.495 but >= 0.485, then the region teams' adjustment is -0.004

    If a regional pool's average Pure URPI is < 0.485, then the region teams' adjustment is -0.008

    Each year, at the beginning of the season, there would be a review of teams' schedules to identify which of the five regions the teams belong in, based on where they play the majority of their games or, for the few that don't play a majority in any region, based on where they play the predominant number of their games. (This would allow the system to take into consideration, among other things, conference realignments.)

    For both the ARPI and the ANCRPI, as is the case with the bonus and penalty awards, the regional adjustments are based on the Pure URPI (not the UNCRPI).
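
    And a sketch of the yearly region assignment and of applying the adjustment to a team's rating. It reuses region_adjustment from the sketch above; how an exact tie between regions would be broken is a detail I haven't specified, so the tie handling here is arbitrary.

    from collections import Counter

    REGIONS = ("Middle", "Northeast", "Southeast", "Southwest", "West")  # the five pools

    def assign_region(game_regions):
        """Assign a team to the region in which it plays the majority of its
        scheduled games or, if no region has a majority, the predominant
        (largest) number of them.  `game_regions` has one region label per
        scheduled game."""
        counts = Counter(game_regions)
        region, _ = counts.most_common(1)[0]  # ties broken arbitrarily here
        return region

    def region_adjusted(rating, team_region, region_avg_pure_urpi):
        """Add the team's regional adjustment.  The same adjustment, keyed off
        the region's average Pure URPI, is used for both the ARPI and the
        ANCRPI; region_adjustment() is from the sketch above."""
        return rating + region_adjustment(region_avg_pure_urpi[team_region])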

    I do not think the NCAA should make adjustments to take into account home field imbalances, even though there are imbalance patterns and they favor strong conferences and regions. The problem is that it is not possible to make enough adjustments using the bonus and penalty awards and the region adjustments to fully mitigate the conference and regional playing pool problems. The system actually needs the rating benefits that strong conferences and regions get from the home field imbalances to partially mitigate the conference and region problems. In fact, I have a strong feeling that the NCAA staff is fully aware of this and it explains why the formula for women's soccer ignores game locations.

    As compared to the current NCAA formula, including its 2011 bonus and penalty adjustment amounts, the improved ARPI and NCRPI are just as accurate generally, so a change would not degrade accuracy.

    Regarding the conferences problem:

    1. The current and improved ARPI both still have some conference problem, and it is at about the same level for both of them. It's not a huge problem, but it has some significance.

    2. The current and improved NCRPI do not have a conference problem. In other words, they both solve the conference problem as I have identified it. This, of course, is why the NCAA developed the NCRPI and, for conferences, it works.

    3. There always are going to be some overrated and some underrated conferences. My objective was to reduce or eliminate the overrating and underrating caused by the problem the RPI has rating strong conferences and weak conferences in a single system, without proposing revisions that would make the remaining overratings and underratings significantly greater than under the current system. The extent is measured simply by the spread between the performance level, in relation to its rating, of the best performing conference and that of the poorest performing conference (illustrated by the small sketch below). The improved URPI somewhat reduces the spread in comparison to the current URPI, which is good; but the improved ARPI somewhat increases the spread by a roughly equivalent amount. The improved UNCRPI and the current UNCRPI have about the same spread. The improved ANCRPI, however, reduces the spread in comparison to the current ANCRPI, with the reduction being greater than the URPI and ARPI differences.
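
    To pin down what I mean by "spread": given one performance-versus-rating number per conference (however that number is computed), the spread is nothing fancier than the gap between the best and the poorest. A minimal sketch:

    def performance_spread(perf_vs_rating):
        """Spread between the best- and poorest-performing conferences relative
        to their ratings.  `perf_vs_rating` maps each conference to a single
        performance-versus-rating number."""
        values = perf_vs_rating.values()
        return max(values) - min(values)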

    Regarding the regional playing pool problem:

    1. The improved ARPI still has a slight problem, but not enough to be a major concern, and it is a very much smaller problem than under the current ARPI.

    2. The improved ANCRPI has an even slighter problem, again very much smaller than under the current ANCRPI.

    3. As with conferences, there always are going to be some overrated and some underrated regions. The performance spreads for both the ARPI and the ANCRPI are much smaller for the improved RPI than for the current RPI.

    I've updated the RPI for Division I Women's Soccer website to explain all of this in detail, including my methodology. The actual report on my work to improve the RPI is on this page, although other pages include a lot of related information: https://sites.google.com/site/rpifordivisioniwomenssoccer/modified-rpi

    I also included on that webpage a comparison of the ARPI and ANCRPI 2011 ratings for the Top 60 teams under the current RPI and under the improved RPI, plus my analysis of whether there would have been any effects on the at large selections for 2011. The differences between the two systems are not huge, but in some cases they could make a difference. Based on my model for how the selection system works, which pretty much has produced results that match the NCAA's selections, I think there would have been one difference in the at large selections in 2011: NC State would have gotten an at large selection, because its rating under the improved RPI would have put it in the group ranked high enough to be automatically "in" without going through the bubble selection process. Texas, as the last bubble team to get an at large selection under my model, would have been "out." Essentially, NC State was underrated by the current RPI, and thus was "out," simply because the Southeast region was very strong in 2011; under a properly region-calibrated system, it would have been "in."

    As we go through the season, I'll be providing "improved" ratings as well as NCAA ratings in order to allow comparisons. This will let people see which NCAA Tournament candidate teams, if any, are negatively affected by the NCAA's not having a properly region-calibrated system. For a couple of reasons, it probably isn't realistic to think the NCAA would move to a properly region-calibrated system. For one thing, the NCAA would have to admit that the RPI has a regional problem (as does any system based on games data). Although I am confident the NCAA knows the RPI has this problem, it never has been willing to acknowledge it publicly. For another thing, moving to a properly region-calibrated system would be politically difficult since there would be winners and losers from a regional and individual team perspective.
     
  25. cpthomas

    It's too early for teams' RPI ratings and rankings to have a lot of significance, since the RPI formula is calibrated so as to be really useful only at the end of the regular season. For fun and for illustrative purposes, though, I've done a ranking comparison of the NCAA's ARPI (assuming no as yet unannounced changes in the formula from last year) to the Improved ARPI as discussed in detail in the preceding post. The Improved ARPI uses the "Pure" RPI Element 2, the 2009 NCAA bonus and penalty adjustment amounts for good wins/ties and poor losses/ties, and regional adjustments designed so that teams from the different regions on average perform more closely in accord with their ratings than under the NCAA's formula. Based on the regions' average URPIs at this stage in the season, the regional adjustments described in the preceding post are as follows (region/average URPI/regional adjustment):

    Middle 0.4828 -0.008
    Northeast 0.4945 -0.004
    Southeast 0.5046 0.000
    Southwest 0.4819 -0.008
    West 0.5250 +0.008
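
    For anyone following the code sketches in the preceding post, these adjustments are just the region tier function applied to the listed averages:

    # Early-season region averages from the table above, run through the
    # region_adjustment() sketch from the preceding post.
    early_avg_pure_urpi = {
        "Middle": 0.4828, "Northeast": 0.4945, "Southeast": 0.5046,
        "Southwest": 0.4819, "West": 0.5250,
    }
    for region, avg in sorted(early_avg_pure_urpi.items()):
        print(region, avg, region_adjustment(avg))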

    This produces rankings for the top group of teams as follows. I'm showing 73 teams because that group includes the top 55 under both the NCAA's ARPI and the Improved ARPI, and the top 55 represents the teams likely to be in consideration for the NCAA Tournament. This really is for illustration only at this stage of the season, since I expect the differences will be smaller by the time the regular season is over. The three numbers with each team are (1) NCAA ARPI rank, (2) Improved ARPI rank, and (3) the difference between the two (NCAA rank minus Improved rank, so a positive number means the team ranks better under the Improved ARPI); there's a small snippet after the table showing that arithmetic.

    FloridaState 1 2 -1
    UCLA 2 1 1
    UCF 3 4 -1
    WashingtonState 4 3 1
    MississippiU 5 8 -3
    PennState 6 7 -1
    WashingtonU 7 5 2
    LongBeachState 8 6 2
    SanDiegoState 9 9 0
    Louisville 10 11 -1
    PortlandU 11 13 -2
    Stanford 12 10 2
    TennesseeU 13 16 -3
    BYU 14 12 2
    WisconsinU 15 19 -4
    VirginiaU 16 15 1
    PennsylvaniaU 17 25 -8
    NorthCarolinaU 18 14 4
    Quinnipiac 19 17 2
    TexasTech 20 18 2
    LouisianaTech 21 21 0
    Duke 22 22 0
    KentuckyU 23 24 -1
    Harvard 24 33 -9
    VirginiaTech 25 28 -3
    FloridaU 26 23 3
    IllinoisU 27 31 -4
    NorthwesternU 28 32 -4
    BostonCollege 29 29 0
    GeorgiaU 30 27 3
    Baylor 31 39 -8
    NotreDame 32 30 2
    CentralMichigan 33 37 -4
    SanDiegoU 34 20 14
    Yale 35 43 -8
    Pepperdine 36 26 10
    OhioState 37 35 2
    ColoradoCollege 38 40 -2
    LaSalle 39 38 1
    MissouriU 40 48 -8
    UNCWilmington 41 42 -1
    MississippiState 42 50 -8
    Dartmouth 43 36 7
    ColoradoU 44 46 -2
    MarylandU 45 41 4
    Memphis 46 51 -5
    IowaU 47 60 -13
    LSU 48 44 4
    William&Mary 49 52 -3
    CalStateNorthridge 50 34 16
    MinnesotaU 51 57 -6
    JamesMadison 52 53 -1
    Providence 53 64 -11
    Marquette 54 56 -2
    OklahomaState 55 65 -10
    MiddleTennessee 56 63 -7
    TexasA&M 57 62 -5
    Georgetown 58 71 -13
    UNCGreensboro 59 49 10
    StonyBrook 60 74 -14
    Furman 61 61 0
    MassachusettsU 62 47 15
    UCDavis 63 55 8
    Campbell 64 66 -2
    IllinoisState 65 81 -16
    WakeForest 66 80 -14
    Gonzaga 67 58 9
    FloridaGulfCoast 68 72 -4
    SantaClara 69 45 24
    ConnecticutU 70 69 1
    AlabamaU 71 70 1
    WesternMichigan 72 85 -13
    FresnoState 73 54 19
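
    If anyone wants to play with the numbers, the third column and the biggest movers fall straight out of the two rank columns. A few rows from the table, just to show the arithmetic:

    # A few rows from the table above as (team, NCAA ARPI rank, Improved ARPI rank).
    rows = [("UCLA", 2, 1), ("SanDiegoU", 34, 20), ("IowaU", 47, 60), ("SantaClara", 69, 45)]

    # Third column: NCAA rank minus Improved rank (positive = better under the Improved ARPI).
    diffs = [(team, ncaa - improved) for team, ncaa, improved in rows]

    # Largest rank changes in either direction.
    biggest_movers = sorted(diffs, key=lambda pair: abs(pair[1]), reverse=True)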
     
