NCAA RPI Error Affected 2010 and 2011 Ratings

Discussion in 'Women's College' started by cpthomas, Nov 22, 2011.

  1. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    As those of you who have followed the RPI threads for last year and this year know, I have believed for over a year that in mid-season 2010, the NCAA staff inadvertently introduced an error into the NCAA’s computation system for Division I women’s soccer. Unfortunately, since the NCAA kept the RPI ratings secret (as distinguished from the rankings) and also kept the amounts of the Adjusted RPI bonus and penalty adjustments secret, I couldn’t prove conclusively that I was right.

A few weeks ago, the NCAA began making public a great deal of prior season RPI-related information, including actual ratings. In addition, due to nc-soccer’s discovery of a problem in the NCAA’s 2011 end-of-regular-season RPI ranking report, involving the teams’ listed win-loss-tie records, the NCAA provided me with its Team Sheets and Nitty Gritty report for the 2011 season. These were pieces of information provided to the Women’s Soccer Committee for their Tournament bracket formation process, and the NCAA staff provided them to me to demonstrate that the win-loss-tie records provided to the Committee were correct. The Team Sheets, among other things, included teams’ Adjusted RPI ratings for the 2011 season. (According to the NCAA staff, both of these documents now are public documents that will be released online in about a month.)

    With this new information in hand, I have been able to determine that the Adjusted RPI ratings and rankings that the NCAA staff provided to the Division I Women’s Soccer Committee for both the 2010 and 2011 seasons, for use in the Committee’s NCAA Tournament at large selections and seeding, in fact were incorrect. This appears to be due to a programming error that the NCAA staff made in the middle of the 2010 season and that carried forward into the 2011 season.

    The only possible way I could be wrong is if the Women’s Soccer Committee, prior to the 2010 season, authorized significant changes from the bonus and penalty adjustments used in previous seasons. I have been able to find no record of the Committee authorizing significant changes, nor have I found any record of the Championships/Sports Management Cabinet having approved changes, which I believe would have been required. Further, I asked a Committee member about whether the Committee had approved any changes and the member had “no knowledge regarding any changes.” In addition, I have communicated what happened (set out below) to the NCAA staff and to the Women’s Soccer Committee and have had no response from them, including no claim that the Committee approved the changes.

    Here is the explanation (slightly modified) that I provided to the NCAA staff and the Women’s Soccer Committee of the error, how it occurred, how it affected the end-of-season rankings, and how the NCAA could avoid similar errors in the future.

    What Was the Error?

    As you know, the NCAA first computes the “Normal RPI” (I call it the unadjusted RPI) and then computes the Adjusted RPI. The adjustments are bonuses for good wins and ties and penalties for poor losses and ties. The error was in the amounts of the bonus and penalty adjustments.
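To make the mechanics concrete, here is a minimal sketch (in Python) of how such a bonus/penalty adjustment pass might work, using the 2009 amounts from the table below. The data structures, function names, and rank-band handling are my own illustrative assumptions, not the NCAA's actual code.

```python
# Illustrative sketch: deriving an Adjusted RPI from the Normal
# (unadjusted) RPI via a lookup table keyed by result category and
# opponent-rank band. Values are the 2009 amounts quoted in this post;
# everything else (names, structures) is hypothetical.

ADJUSTMENTS_2009 = {
    # (result, opponent-rank band): (away, neutral, home)
    ("win", "1-40"):     (0.0032, 0.0030, 0.0028),
    ("win", "41-80"):    (0.0018, 0.0016, 0.0014),
    ("tie", "1-40"):     (0.0016, 0.0014, 0.0012),
    ("tie", "41-80"):    (0.0012, 0.0010, 0.0008),
    ("tie", "135-205"):  (-0.0008, -0.0010, -0.0012),
    ("tie", "206-322"):  (-0.0012, -0.0014, -0.0016),
    ("loss", "135-205"): (-0.0014, -0.0016, -0.0018),
    ("loss", "206-322"): (-0.0028, -0.0030, -0.0032),
}

SITE_INDEX = {"away": 0, "neutral": 1, "home": 2}

def band(rank):
    """Map an opponent's rank to the table's band, or None."""
    if rank <= 40:
        return "1-40"
    if rank <= 80:
        return "41-80"
    if 135 <= rank <= 205:
        return "135-205"
    if rank >= 206:
        return "206-322"
    return None  # ranks 81-134 carry no bonus or penalty

def adjusted_rpi(normal_rpi, games):
    """games: list of (result, opponent_rank, site) tuples.

    Results outside the table's categories (e.g. a loss to a top-40
    team, or any result against a team ranked 81-134) add nothing."""
    total = 0.0
    for result, opp_rank, site in games:
        key = (result, band(opp_rank))
        if key in ADJUSTMENTS_2009:
            total += ADJUSTMENTS_2009[key][SITE_INDEX[site]]
    return normal_rpi + total
```

Note that only the listed categories carry an adjustment: a win over a team ranked 135 or worse, for example, earns no bonus and no penalty.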

    In 2009, the NCAA’s RPI formula used the following bonus and penalty amounts:

    2009

    Code:
    CATEGORY	AWAY	        NEUTRAL	        HOME
    Win v 1-40	0.0032	        0.003	        0.0028
    Win v 41-80	0.0018	        0.0016	        0.0014
    Tie v 1-40	0.0016	        0.0014	        0.0012
    Tie v 41-80	0.0012	        0.001	        0.0008
    Tie v 135-205	-0.0008	        -0.001	        -0.0012
    Tie v 206-322	-0.0012	        -0.0014	        -0.0016
    Loss v 135-205	-0.0014	        -0.0016	        -0.0018
    Loss v 206-322	-0.0028	        -0.003	        -0.0032
    I can be certain that the NCAA used the above amounts in 2009 because, using my RPI computation system, they produce adjustment totals going from the unadjusted to the adjusted RPI that in every case match the NCAA’s adjustment totals. This would not be possible unless these are the right amounts.
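The matching test described above can be sketched as code. The following is a hypothetical illustration, assuming each team's games have already been classified into result/band/site triples; none of these names or structures come from the NCAA's system.

```python
# Illustrative sketch of the verification described above: for each
# team, the NCAA's published adjustment total (Adjusted RPI minus
# Normal RPI) must equal the total implied by a candidate
# bonus/penalty table. All names here are assumptions.

SITE_INDEX = {"away": 0, "neutral": 1, "home": 2}

def implied_total(games, table):
    """Sum the adjustments a candidate table assigns to a team's games.

    games: list of (result, band, site), e.g. ("win", "1-40", "away");
    games outside the table's categories contribute nothing."""
    total = 0.0
    for result, band, site in games:
        amounts = table.get((result, band))
        if amounts is not None:
            total += amounts[SITE_INDEX[site]]
    return total

def table_matches(teams, table, tol=5e-5):
    """teams: list of (normal_rpi, adjusted_rpi, games).

    A candidate table survives only if it reproduces every team's
    adjustment total; a single mismatch rules it out."""
    return all(
        abs((adjusted - normal) - implied_total(games, table)) < tol
        for normal, adjusted, games in teams
    )
```

A candidate table that reproduces every team's adjustment total, across hundreds of teams, is overwhelmingly unlikely to do so by coincidence, which is what underlies the certainty claimed here.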

    In 2010, the NCAA started out using these same bonus and penalty amounts, which were the basis for the first RPI report it issued on October 5, 2010. I know this because for that RPI report, my RPI rankings matched the NCAA’s exactly.

    Some time during the week after releasing the October 5 RPI report, the NCAA staff mistakenly changed the bonus and penalty amounts. As a result, the computations for the subsequent reports were based on different bonus and penalty amounts. These changed – and erroneous – bonus and penalty amounts were as follows:

    2010

    Code:
    CATEGORY	AWAY	        NEUTRAL	        HOME
    Win v 1-40	0.0026	        0.0024	        0.0022
    Win v 41-80	0.002	        0.0018	        0.0015
    Tie v 1-40	0.0013	        0.0011	        0.0009
    Tie v 41-80	0.0007	        0.0004	        0.0002
    Tie v 135-205	0.0002	        -0.0004	        -0.0007
    Tie v 206-322	-0.0009	        -0.0011	        -0.0013
    Loss v 135-205	-0.0015	        -0.0018	        -0.002
    Loss v 206-322	-0.0022	        -0.0024	        -0.0026
    I know these were the amounts the NCAA changed to because, just as for 2009, when programmed into my RPI computation system, they produce adjustment totals going from the unadjusted to the adjusted RPI that in every case match the NCAA’s adjustment totals. This would not be possible unless these are the right amounts. As an extra precaution, however, I also tested the above adjustment amounts against the data underlying the NCAA’s October 5, 2010 RPI report, to be sure I am right that these adjustment amounts came into play only after the NCAA issued that report. I am right: These new adjustment amounts do not produce the rankings the NCAA had in its October 5 report. Rather, only the 2009 adjustment amounts produce those rankings. Thus in mid-season 2010, the NCAA staff changed the adjustment amounts to those in the table immediately above.

    If you will look at the amounts of the bonus and penalty awards in relation to the categories and compare them to the 2009 amounts, you will see that this represented a major change in the overall structure of how values are awarded. In particular:

    Bonus amounts for wins against the top 40 teams went from 0.0032 (away) – 0.0030 (neutral) – 0.0028 (home) to 0.0026 (away) – 0.0024 (neutral) – 0.0022 (home), thus significantly reducing the bonuses for good wins.

Bonus amounts for wins against teams 41-80, in the correct 2009 formula, were 0.0018 (away) – 0.0016 (neutral) – 0.0014 (home), significantly less than the bonus amounts for wins against top 40 teams. In the incorrect 2010 formula, however, the amounts for wins against teams 41-80 were 0.0020 (away) – 0.0018 (neutral) – 0.0015 (home), only slightly less than the bonus amounts for wins against top 40 teams. Thus the 2010 error changed the overall structure of the bonus amounts for wins from one in 2009 that strongly emphasized wins against teams in the top 40, as compared to wins against teams 41-80, to one that only slightly favored wins against teams in the top 40.

    Bonus amounts for ties against teams in the top 40, in the correct 2009 formula, overlapped and were almost the same as the bonus amounts for wins against teams in the 41-80 range. On the other hand, in the incorrect 2010 formula, the bonus amounts for ties against teams in the top 40 all were less than all of the bonus amounts for wins against teams in the 41-80 range. Thus here too, the 2010 error deleted the correct 2009 formula’s strong emphasis on good results, in this case ties, against teams in the top 40 range.

In a further but less significant error, the new formula, rather than imposing a 0.0002 penalty for away ties against teams ranked 135-205 as it clearly should have, mistakenly awarded a 0.0002 bonus.

    In 2011, the NCAA continued with the incorrect mid-stream 2010 amounts, but with a correction from a bonus to a penalty for away ties against teams rated 135-205, and with the amounts of the awards slightly adjusted. Notwithstanding the slight adjustments, however, the basic format of the 2011 awards matched the format of the 2010 awards, thus continuing the elimination of the correct 2009 and early 2010 heavy emphasis on wins and ties against teams ranked 1-40. The 2011 amounts were as follows:

    2011

    Code:
    CATEGORY	AWAY	        NEUTRAL	        HOME
    Win v 1-40	0.0024	        0.0022	        0.002
    Win v 41-80	0.0018	        0.0016	        0.0014
    Tie v 1-40	0.0012	        0.001	        0.0008
    Tie v 41-80	0.0006	        0.0004	        0.0002
    Tie v 135-205	-0.0002	        -0.0004	        -0.0006
    Tie v 206-322	-0.0008	        -0.001	        -0.0012
    Loss v 135-205	-0.0014	        -0.0016	        -0.0018
    Loss v 206-322	-0.002	        -0.0022	        -0.0024
    Here too, I know these are the amounts the NCAA used in 2011 because when programmed into my RPI computation system, they produce adjustment totals going from the unadjusted to the adjusted RPI that in every case match the NCAA’s adjustment totals.

    The NCAA staff does not have the authority to make significant changes in the overall structure of the bonus and penalty amounts. All such changes are subject to the approval of the Women’s Soccer Committee (and I believe also are subject to the approval of the Championships/Sports Management Cabinet), and there is no evidence I have been able to find of such an approval nor has the NCAA staff or Women’s Soccer Committee advised me of any such approval. Thus I feel confident in saying that the bonus and penalty amounts ultimately used in 2010 and 2011, together with the ratings and rankings that they produced and that the Committee used in its Tournament bracket formation process, were unauthorized and incorrect.

    (The NCAA staff has advised me that there are some very minor changes to the bonus and penalty amounts [in the range of 0.0001] that can be made automatically to properly calibrate the amounts in relation to expected overall ratings. The amounts of the changes from the correct 2009 amounts to the incorrect 2010 and 2011 amounts, however, are not those kinds of changes.)

    How Did the Error Occur?

The error occurred during the two-week period after October 5, 2010. During the first week, the NCAA staff person responsible for the RPI for Division I Women’s Soccer was conducting some experiments with the RPI. I do not know the details of the experiments, but they apparently included testing different bonus and penalty amounts. On completing the experiments, however, the staff person forgot to reinstall the correct bonus and penalty amounts. This resulted in the October 12, 2010 RPI report having incorrect rankings. At that time, knowing my rankings had matched the NCAA’s rankings for its October 5 RPI report and should have matched their rankings for the October 12 report, I advised the NCAA staff person that there was a problem with the October 12 report. He realized he had forgotten to reinstall the correct bonus and penalty amounts and advised he would reinstall them for upcoming reports. When the October 19 report came out, however, it again had incorrect ratings. I again advised the NCAA staff person of this. In response, he advised me that he had “moved the numbers back to the original origin” and, as to why the rankings still differed, “I don’t know.” In fact, however, as I now have shown, he did not move the numbers back to the “original origin” but instead installed the incorrect 2010 numbers.

    How Did the Error Affect the End-of-Season Rankings?

    I will set out below how the error affected the end-of-regular-season rankings for both 2010 and 2011. These are the rankings the Committee used in the 2010 and 2011 Tournament bracket formation processes. Fortunately, it appears the NCAA staff got very lucky, as it seems doubtful the error affected the at large selections in either year although it slightly altered the rankings of teams within the “bubble.” It is not as clear as to seeding, but I’m guessing that the errors would not have affected the seeding either, at least not significantly.

    The tables below show what the rankings should have been for each year for the top 60 teams, using the correct bonus and penalty amounts, compared to what they were using the incorrect amounts installed by the NCAA staff. Where there are differences, I have noted them in bold face:

    2011 CORRECT RANK/2011 NCAA INCORRECT RANK/TEAM

    1 1 Duke
    2 2 Stanford
    3 3 WakeForest
    4 4 VirginiaU
    5 5 Memphis
    6 6 FloridaState
    7 7 OklahomaState
    8 8 FloridaU
    9 9 UCLA
    10 10 Pepperdine
    11 11 TexasA&M
    12 12 NorthCarolinaU
    13 15 Auburn
    14 13 PennState
    15 14 Baylor

    16 16 BostonCollege
    17 17 WestVirginiaU
    18 18 SantaClara
    19 20 IllinoisU
    20 19 Milwaukee
    21 22 UCIrvine
    22 21 Dayton

    23 23 Marquette
    24 25 TennesseeU
    25 24 KentuckyU
    26 27 MarylandU
    27 26 UCF

    28 28 MiamiFL
    29 29 SanDiegoU
    30 32 VirginiaTech
    31 31 LongBeachState
    32 30 BostonU
    33 35 LSU
    34 34 William&Mary
    35 37 SouthCarolinaU
    36 33 LaSalle
    37 36 OregonState
    38 38 Louisville

    39 39 KansasU
    40 40 CaliforniaU
    41 43 AlabamaU
    42 41 NotreDame
    43 42 Georgetown
    44 46 GeorgiaU

    45 45 NCState
    46 44 StephenFAustin
    47 47 MassachusettsU
    48 49 PortlandU
    49 48 Richmond
    50 51 OhioState
    51 50 TexasU
    52 54 WashingtonState

    53 53 CentralMichigan
    54 52 Harvard
    55 55 BYU
    56 56 Denver
    57 57 MichiganState
    58 59 Samford
    59 58 UtahState

    60 60 MissouriU

    2010 CORRECT RANK/2010 NCAA INCORRECT RANK/TEAM

    1 1 Stanford
    2 2 NorthCarolinaU
    3 3 PortlandU
    4 4 FloridaU
    5 6 MarylandU
    6 5 OklahomaState

    7 7 BostonCollege
    8 8 VirginiaU
    9 9 NotreDame
    10 11 Marquette
    11 10 WestVirginiaU

    12 12 OhioState
    13 13 FloridaState
    14 14 TexasA&M
    15 15 WakeForest
    16 17 UNCGreensboro
    17 16 Hofstra

    18 18 UCF
    19 20 UCIrvine
    20 19 Georgetown
    21 23 UCLA
    22 21 Memphis
    23 22 WisconsinU

    24 24 Dayton
    25 25 ArizonaState
    26 26 SantaClara
    27 28 OklahomaU
    28 27 SouthernCalifornia

    29 29 OregonState
    30 30 Duke
    31 31 BYU
    32 34 SouthFlorida
    33 32 SouthCarolinaU
    34 33 NewMexicoU
    35 36 TexasU
    36 35 IllinoisU

    37 37 CaliforniaU
    38 38 MinnesotaU
    39 39 ConnecticutU
    40 40 SanDiegoU
    41 41 LongBeachState
    42 42 Denver
    43 43 VirginiaTech
    44 46 PennState
    45 45 MichiganU
    46 44 BostonU
    47 47 WashingtonU
    48 48 GeorgiaU
    49 49 MiamiFL
    50 50 JamesMadison
    51 51 Toledo
    52 52 TexasTech
    53 53 SMU
    54 54 Auburn
    55 55 NebraskaU
    56 58 LoyolaMarymount
    57 56 Siena
    58 59 Milwaukee
    59 57 CentralMichigan

    60 60 Baylor
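
The comparison listings above can be scanned mechanically. Here is a small sketch, assuming each row reads "correct_rank ncaa_rank team" as in the tables above (team names contain no spaces in the listings):

```python
# A small sketch for flagging rank shifts in comparison listings of
# the form "correct_rank ncaa_rank team".

def rank_shifts(rows):
    """Return (team, correct_rank, ncaa_rank) for every differing row."""
    shifts = []
    for row in rows:
        correct, ncaa, team = row.split()
        if correct != ncaa:
            shifts.append((team, int(correct), int(ncaa)))
    return shifts

sample = ["12 12 NorthCarolinaU", "13 15 Auburn", "14 13 PennState"]
# rank_shifts(sample) flags Auburn and PennState but not North Carolina.
```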

    How to Avoid Similar Errors in the Future

    I could have told the NCAA staff exactly what the problem was at some point during the 2010 season, if I had had access to the NCAA’s Normal RPI and Adjusted RPI ratings on which the initial erroneous NCAA RPI reports were based (the October 12 and 19 reports). This would have taken considerable work, but I definitely could and would have identified the exact problem before the end of the season. Even better, I could have told the NCAA staff immediately on issuance of the October 12 and 19 reports exactly what the problem was if the NCAA also, in advance of the 2010 season, had made the bonus and penalty amounts public.

    Problems like this inevitably are going to occur now and then, notwithstanding the best efforts of the NCAA staff. Two changes will make it much more likely future errors will be caught quickly so they can be corrected before they affect the at large selection and seeding process: (1) In the regular weekly NCAA RPI reports released publicly during the season, include the Normal RPI and Adjusted RPI ratings; and (2) In the Pre-Championship Manual for each season, include the bonus and penalty amounts that will be applicable for that season. Whether the NCAA will be willing to do either of these things remains to be seen.
     
  2. cpthomas

    cpthomas BigSoccer Supporter

    Re: NACC RPI Error Affected 2010 and 2011 Ratings

    Moderator, I goofed on the title to this thread. Can you change NACC to NCAA?
     
  3. GoCourage

    GoCourage Member

    May 27, 2001
    Durham, NC
    It is surprising and unfortunate to me that no one has picked this up.

    Maybe the length of the post put people off from reading it, or maybe people don't care so much about whether the NCAA is held responsible for the accuracy of the information that they publish... I guess that shouldn't be so strange for an organization that holds its member institutions up to a secret set of selection criteria.

    Oh well...
     
  4. cpthomas

    cpthomas BigSoccer Supporter

    Re: NACC RPI Error Affected 2010 and 2011 Ratings

    GoCourage has motivated me to summarize the initial post on this thread:

    Part way through last year's season -- 2010 -- the NCAA staff, in the course of conducting some RPI experiments, changed the RPI bonus and penalty adjustment formula. They forgot to change it back. When I pointed this out to them, they said they would restore the original formula. But, they did not restore the original formula and weren't able to figure out what the problem was.

    This problem continued throughout the 2010 season and resulted in the Women's Soccer Committee having the incorrect RPI ratings and rankings when they set the 2010 bracket.

    The problem also continued into the 2011 season, notwithstanding my advising the NCAA staff that the problem was ongoing. They never managed to figure out what the problem was (and sort of tried to deny there was a problem), so the problem also resulted in the Committee having the incorrect ratings and rankings when they set the 2011 bracket.

    I'm still trying to get the staff and Committee to address and correct the problem. The best response I can get from them, so far, is that the Committee will be discussing the RPI at its January meeting.

    (I'm being very nice here, in my characterization of the responses I've gotten from the NCAA staff and from the Committee.)

    If anyone knows any of the members of the Committee, it would be great if you would send them an email, something to the effect: "I understand that the NCAA staff created a programming error in the RPI bonus and penalty adjustment formula. Can you tell me about what the problem is and give me some assurance it will be corrected?"

    Here are the current members of the Women's Soccer Committee and their email addresses. Meredith Jenkins, from Auburn, is the chair:


    Central Region: Tim Hickman (Big Twelve Conference, Missouri, administrator, FBS): hickmantl@missouri.edu

    Great Lakes Region: Matt Wolfert (Mid-American Conference, Ball State, administrator, FBS): mwolfert@bsu.edu

    Mid-Atlantic Region: Vicky Chun (Patriot League, Colgate, administrator, FCS/Division I): vchun@colgate.edu

    Northeast Region: Mike Parsons (Big East Conference, West Virginia, administrator, FBS): mike.parsons@mail.wvu.edu

    Pacific Region: Mike Friesen (Mountain West Conference, San Diego State, coach, FBS): wsoccer@mail.sdsu.edu

    South Region: Meredith Jenkins, chair (SEC, Auburn, administrator, FBS): heinsml@auburn.edu

    Southeast Region: Julie Orlowski (Atlantic Sun Conference, Stetson, coach, FCS/Division I): jorlowsk@stetson.edu

    West Region: Ada Greenwood (West Coast Conference, San Diego, coach, FCS/Division I): hadriang@sandiego.edu

    At Large: Beth Goetz (Horizon League, Butler, administrator, FCS/Division I): bgoetz@butler.edu

    At Large: David Hansen (Conference USA, UCF, administrator, FBS): dhansen@athletics.ucf.edu
     
  5. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    Re: NACC RPI Error Affected 2010 and 2011 Ratings

    Just curious... Were any of the committee members' schools affected?
     
  6. thisisstupid

    thisisstupid Member

    Oct 31, 2003
    Re: NACC RPI Error Affected 2010 and 2011 Ratings

    The NCAA eliminated bonus points for wins & ties at home following the 2009 season. You can only get bonus points away and at neutral sites. Chances are the values were also changed.
     
  7. cpthomas

    cpthomas BigSoccer Supporter

    Re: NACC RPI Error Affected 2010 and 2011 Ratings

    I see that you mostly post regarding men's soccer. This may be true for some other sport, including DI men's soccer, for which I don't track the RPI, but it's not true for Division I women's soccer. I have verified this through my own calculations, which are exactly correct, and also through the NCAA staff and the DI Women's Soccer Committee.

    Each sport has its own variant. Some have no adjustments. Some have more complex adjustments, some less complex. The adjustments for DI women's soccer have been as I set out.
     
  8. cpthomas

    cpthomas BigSoccer Supporter

    Although the NCAA refuses to own up to its programming error with the RPI for Division I women's soccer, they advise that "the [women's soccer] committee will review all aspects of the RPI at its annual meeting in late January." It sure would be nice to know exactly what they're going to be talking about, as it could be very important, but in recent years it has been virtually impossible to gain access to the committee's agendas or minutes.

    The real issue the Committee needs to address is the RPI's problem rating teams within a unified national system. This is a very serious problem for Division I women's soccer, because there is a very strong regional imbalance. Also, the regions are "structured" differently -- some have a relatively high level of parity (West in particular) and some have relatively little parity (Southwest, for example), which also contributes to the problem. This also is a problem for conferences, although not to the same extent. (The non-conference RPI does not solve the conference problem.)

    There has been a trend, in recent years, to take game sites into account in the RPI formula. This ordinarily is accompanied by getting rid of the bonus and penalty system for good wins/ties and poor losses/ties. In my opinion, if the Committee were to go this route, it would be a truly terrible mistake. The bonus/penalty system and the "non-consideration" of game sites actually help mitigate the regions and conferences problem. They don't fix the problem, but if the Committee were to follow the trend, the problem would become even worse.

    To give some perspective, the "unfairness" caused by the regions and conferences problem is 3 to 4 times as great as the countervailing "unfairness" caused by not taking game sites into account.
     
  9. socdad

    socdad Member

    Nov 9, 2011
    Dayton, Oh
    Not exactly 'on topic,' but I'm wondering if there would be any impact on the conference rankings (A Sun / Big South) if Massey were to list Campbell in the Big South Conference. They switched from the A Sun to the BSC last season, and it looks like Massey still lists CU in the 'ASC'.
     
  10. cpthomas

    cpthomas BigSoccer Supporter

    I missed that switch too, as did nc-soccer. I can't tell you what difference it would have made for Massey's conference rankings to have Campbell in the Big South Conference. For the Adjusted RPI, however, with Campbell identified as still in the Atlantic Sun Conference, I had the Atlantic Sun ranked 22 and the Big South ranked 27. With a correction placing Campbell in the Big South, the Atlantic Sun Conference drops from 22 to 23, with the Missouri Valley Conference moving into the 22 spot. The Big South Conference remains at 27, although much closer to 26. This all is for the end of regular season (including conference tournaments) rankings.

    I see that for his post-NCAA Tournament rankings, Massey has the Atlantic Sun Conference at 23 (I'm excluding his Independent group and total NCAA group) and the Big South Conference at 30. I doubt the Campbell switch would have made much difference, if any.
     
  11. socdad

    socdad Member

    Thanks for the info ...
     
