Here is my 2024 Report 5, with updated end-of-season rank and at large selection predictions, incorporating the actual results of games played through Sunday, September 8. Next week, the NCAA should publish its first ratings for the season at the RPI Archive. At that point, we hopefully will know for sure whether the changed valuing of ties in the NCAA RPI formula will be in effect, whether the new bonus and penalty system will be in effect, and what the bonus and penalty amounts will be.
It is for real: there have been changes to the RPI formula.

1. In calculating Winning Percentage, the value of a tie has been changed from half a win to one third of a win.

2. Oddly, in calculating Opponents' Winning Percentage (and Opponents' Opponents' Winning Percentage), the value of a tie remains at half a win. I believe this is an oversight, but maybe not. Either way, I have advised the Women's Soccer Committee and NCAA staff that this appears to have been an oversight, so we will see ....

3. The Bonus and Penalty adjustment regime has changed. Here is the new regime:

a. Tier 1 bonuses for wins and ties against teams with unadjusted RPI ranks of 1 to 25:
Win Away 0.0032, Win Neutral 0.0030, Win Home 0.0028
Tie Away 0.0020, Tie Neutral 0.0018, Tie Home 0.0016

b. Tier 2 bonuses for wins and ties against teams with unadjusted RPI ranks of 26 to 50:
Win Away 0.0026, Win Neutral 0.0024, Win Home 0.0022
Tie Away 0.0014, Tie Neutral 0.0012, Tie Home 0.0010

c. Tier 3 bonuses for wins against teams with unadjusted RPI ranks of 51 to 100:
Win Away 0.0008, Win Neutral 0.0006, Win Home 0.0004

d. Tier 4 penalties for ties and losses against teams with unadjusted RPI ranks of 151 to 250:
Loss Home -0.0014, Loss Neutral -0.0012, Loss Away -0.0010
Tie Home -0.0008, Tie Neutral -0.0006, Tie Away -0.0004

e. Tier 5 penalties for ties and losses against teams with unadjusted RPI ranks of 251 and poorer:
Loss Home -0.0026, Loss Neutral -0.0024, Loss Away -0.0022
Tie Home -0.0020, Tie Neutral -0.0018, Tie Away -0.0016

An important question not yet answered publicly is whether the 0.500 minimum winning percentage requirement for an NCAA Tournament at large position will value a tie as half a win or a third of a win. Coaches with NCAA Tournament at large aspirations need an answer to that question.
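For anyone who wants to experiment with the new regime, here is a minimal sketch of the tier lookup as a Python function. The amounts and rank ranges are taken directly from the list above; the function and variable names are my own, not the NCAA's, and games against opponents ranked 101 to 150 (or results a tier does not cover) get no adjustment, which is my reading of the regime.

```python
def rpi_adjustment(opp_rank, result, venue):
    """Bonus or penalty for one game under the 2024 regime.

    opp_rank: opponent's unadjusted RPI rank (1 = best)
    result:   'W' (win), 'T' (tie), or 'L' (loss), from the team's perspective
    venue:    'H' (home), 'N' (neutral), or 'A' (away), from the team's perspective
    """
    tiers = [
        # (lowest rank, highest rank, {(result, venue): adjustment})
        (1, 25,   {('W', 'A'): 0.0032, ('W', 'N'): 0.0030, ('W', 'H'): 0.0028,
                   ('T', 'A'): 0.0020, ('T', 'N'): 0.0018, ('T', 'H'): 0.0016}),
        (26, 50,  {('W', 'A'): 0.0026, ('W', 'N'): 0.0024, ('W', 'H'): 0.0022,
                   ('T', 'A'): 0.0014, ('T', 'N'): 0.0012, ('T', 'H'): 0.0010}),
        (51, 100, {('W', 'A'): 0.0008, ('W', 'N'): 0.0006, ('W', 'H'): 0.0004}),
        (151, 250, {('L', 'H'): -0.0014, ('L', 'N'): -0.0012, ('L', 'A'): -0.0010,
                    ('T', 'H'): -0.0008, ('T', 'N'): -0.0006, ('T', 'A'): -0.0004}),
        (251, 10**9, {('L', 'H'): -0.0026, ('L', 'N'): -0.0024, ('L', 'A'): -0.0022,
                      ('T', 'H'): -0.0020, ('T', 'N'): -0.0018, ('T', 'A'): -0.0016}),
    ]
    for low, high, table in tiers:
        if low <= opp_rank <= high:
            return table.get((result, venue), 0.0)
    return 0.0

# An away win over the #10 team earns the largest Tier 1 bonus:
assert rpi_adjustment(10, 'W', 'A') == 0.0032
# A home tie with a team ranked 300 draws a Tier 5 penalty:
assert rpi_adjustment(300, 'T', 'H') == -0.0020
# A loss to a top-25 team carries no penalty at all:
assert rpi_adjustment(5, 'L', 'A') == 0.0
```

Note how small the numbers are relative to a full RPI rating, which runs on a 0-to-1 scale; this is consistent with the later observation that the bonuses and penalties do not move rankings much.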
And, I have clarification: Per advice from the NCAA, the changed weight of ties from a half to a third will apply only in calculating RPI Winning Percentage. It will not apply in calculating Opponents' Winning Percentage and Opponents' Opponents' Winning Percentage, for which ties will continue to be weighted at a half. And, in calculating winning percentage for purposes of the 0.500 winning percentage requirement to qualify for an NCAA Tournament at large position, ties will continue to be weighted at a half.
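To make the tie-weight change concrete, here is a small sketch of what it does to a team's Winning Percentage. The record used is hypothetical, and the function name is my own; the point is just how the same record scores under the old and new tie values.

```python
def winning_pct(wins, losses, ties, tie_value):
    """Winning percentage with a configurable tie value (in wins).

    Pre-2024, a tie counted as half a win (tie_value = 1/2).
    In 2024, for RPI Winning Percentage only, it counts as a
    third of a win (tie_value = 1/3).
    """
    games = wins + losses + ties
    return (wins + tie_value * ties) / games if games else 0.0

# A hypothetical 10-4-4 record:
wp_old = winning_pct(10, 4, 4, 1/2)  # (10 + 2) / 18     = 0.6667
wp_new = winning_pct(10, 4, 4, 1/3)  # (10 + 4/3) / 18   = 0.6296

# Per the NCAA's clarification, OWP, OOWP, and the 0.500 at large
# eligibility test would still use tie_value = 1/2.
```

The more a team ties, the more the 2024 weighting pulls its RPI Winning Percentage down relative to the old formula, which is why the change matters most for teams with tie-heavy records.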
Thanks for doing the leg work here. Did they share their rationale for not applying the new weight to the other two calculations?
Here is my 2024 Report 6:

- Repeating the information above on the now confirmed changes to the NCAA RPI formula;
- Providing current actual RPI information based on games played through September 15; and
- Providing updated end-of-season rank and at large selection predictions, incorporating the actual results of games played through September 15 and predicted results of games not yet played.

In predicting the results of games not yet played, I have shifted from using teams' assigned pre-season ratings to using their actual current NCAA RPI ratings. It is possible that using actual current NCAA RPI ratings will make the predictions even more speculative than if I had continued using the assigned pre-season ratings. Over the coming weeks, however, using current NCAA RPI ratings should become increasingly reliable.

No, the NCAA staff did not explain why they are not applying the new tie weight to the "Strength of Schedule" (Opponents' Winning Percentage and Opponents' Opponents' Winning Percentage) calculations.

Over the coming days, I will analyze how this year's changes will affect the NCAA RPI, as compared to some earlier versions, and also as compared to my Balanced RPI. I want to see how the changes over time -- from the original unadjusted RPI, to the previous bonus and penalty regime, to no overtimes, to the reduced effective weight of ties, to the new bonus and penalty regime -- have affected the RPI as a rating system and have affected the ability of coaches to "trick" the system through strategic scheduling.
Here is my 2024 Report 7 with current actual RPI ratings, ranks, and other information for games played through Sunday, September 22 as well as predicted end-of-season ratings, ranks, and other information and NCAA Tournament at large prospects based on the actual results of games played through September 22 and predicted results of games not yet played. Coming soon: a detailed analysis of the 2024 RPI formula changes.
I have done a detailed report on the effects of the 2022 No Overtime rule and the 2024 RPI Formula changes. The report shows the likely extent of effects on NCAA Tournament at large selections, which will be significant overall, with most of the effects due to the No Overtime rule, with a small number of effects due to the 2024 change of tie values to 1/3 when computing RPI Winning Percentage, and with a very small number of effects due to the new bonus and penalty regime. The report also shows how these changes will affect how the RPI functions as a rating system. You can check it out here: 2024 Report 8: Effects of the 2022 No Overtime Rule and the 2024 Changes to the RPI Formula.
Here is my 2024 Report 9 with current actual RPI ratings, ranks, and other information for games played through Sunday, September 29 as well as predicted end-of-season ratings, ranks, and other information and NCAA Tournament at large prospects based on the actual results of games played through September 29 and predicted results of games not yet played. The predicted results are based on teams' current NCAA RPI ratings based on games played through September 29. As a cautionary note, the current actual RPI ratings show, according to the RPI, what teams have demonstrated through their game results in the season so far. They do not show how good teams are, which can be something quite different.
I have not looked closely, but you have to consider several things:

- Rutgers has an excellent record, so they helped Penn State's strength of schedule.
- How did the teams Penn State has already played do? If they did well overall, then that too helped PSU's strength of schedule.
- How did the teams that previously were ranked ahead of PSU do? If they had poor results, or the teams they already played had poor results, then it may be partly a matter of them dropping down rather than PSU jumping up.

In other words, there are lots of moving parts.
Rutgers is a good team, currently 27 in the RPI, but it does not seem like a tie would help that much. The best wins Penn State has per the RPI are 26 West Virginia and 37 Texas Tech. They have lost to Iowa, Michigan State, and Virginia. They jumped Duke, Iowa, and USC.

Duke beat 74 Louisville. Their only loss is to Ohio State. They hold wins over North Carolina and Virginia.

Iowa beat 56 Indiana. They have 3 ties but no losses. They hold wins over Wake Forest and Penn State.

USC beat Oregon. Their only loss is to Stanford.
Within that cautionary note, who would you say are the good teams? Which schools would be in your top 10? Thank you for all you do.
By referring to teams' ranks, in terms of what teams should get credit for in the Strength of Schedule portion of their RPI ratings, you are making an understandable mistake. Yes, one would think that if you play a team ranked X, then you would get credit in the Strength of Schedule portion of your RPI for playing a team ranked X. Unfortunately, you don't!

Using Penn State, Duke, Iowa, and USC as examples, since those are the ones involved in your question:

- Penn State played Rutgers. Rutgers' RPI rank is 27 and their RPI Strength of Schedule contributor rank is 29. That's a reasonable difference.
- Duke played Louisville. Louisville's RPI rank is 74 and their Strength of Schedule contributor rank is 111. Oops!
- Iowa played Indiana. Indiana's RPI rank is 56 and their Strength of Schedule contributor rank is 113. Double Oops!
- USC played Oregon. Oregon's RPI rank is 134 and their Strength of Schedule contributor rank is 202. Triple Oops!

Also, consider this, all again according to the RPI: The average rank of Penn State's opponents has been 44, with an RPI Strength of Schedule contributor average rank of 80. The two numbers for Duke are 79 and 129. The two numbers for Iowa are 77 and 113. The two numbers for USC are 79 and 104. So, Penn State is getting credit for having played a significantly more difficult schedule than the other three teams, whether you are looking at their RPI ranks or their RPI Strength of Schedule contributor ranks.

You have focused on the good results of the teams. That is valid, but it is something the RPI only minimally addresses. Instead, the RPI in general considers a team's entire record and doesn't focus on its good results. There is an exception to that, which is that it assigns bonuses for good wins and ties. The bonuses break "good" down into three groups: opponents ranked 1-25, 26-50, and 51-100. The best bonuses are for good results against the top group, and so on.
However, the bonuses are not very big and, as I have demonstrated at the RPI and Bracketology for Division I Women's Soccer blog, do not have much effect on rankings. On the other hand, when the Committee gets to selecting and seeding teams in the NCAA Tournament, it looks at specifics in teams' records and will consider the kinds of results you are referring to.

The reason why there is such a divergence between teams' RPI ranks and their ranks as RPI Strength of Schedule contributors is the way the RPI computes Strength of Schedule. In essence, RPI Strength of Schedule is 80% the opponent's winning record and 20% its opponents' winning records. In other words, it really matters what your opponent's winning record is, but it doesn't matter very much against whom the opponent achieved that winning record.

To give you a current example of what can happen because of this, Canisius' current RPI rank is 119 but their RPI Strength of Schedule contributor rank is 26! As a contrast, BYU's RPI rank is 33, but its RPI Strength of Schedule contributor rank is 106! Yes, the RPI Strength of Schedule ranks think Canisius is 80 rank positions better than BYU. And, if you are wondering how important a team's Winning Percentage is in the RPI formula as compared to its Strength of Schedule, each has a 50% effective weight.

This RPI defect has a number of negative ramifications, including that those coaches who really understand how it works can "game" it through smart non-conference scheduling: play top teams from mid-majors, which ordinarily will have good winning records though against relatively weak opponents, and don't play middle-of-the-road teams from power conferences, which ordinarily will have middling winning records against strong opponents. The ability to game the RPI in this way (and for conferences as a whole to "game" it through cooperative non-conference scheduling) is the reason why the NCAA dumped it for basketball.
Because of all this, one must be careful not to assign too much credibility to the NCAA RPI. It has issues ....
I don't know who is in the Top 10, especially since we do not have enough data yet to know. But here are the current Top 10 according to my Balanced RPI, which uses the RPI general structure but with modifications and additional calculations so that teams' Balanced RPI ranks and their Balanced RPI Strength of Schedule contributor ranks are essentially the same:

North Carolina
Duke
Stanford
Iowa
Southern California
Mississippi State
Wake Forest
Arkansas
Michigan State
Utah State

For a statistical rating system really to work, it needs teams to have played about 30 games. We are not close to that, so I would not put a lot of credence in this group, although it doesn't look too bad.

Another place you could go is Kenneth Massey's ratings. At this stage of the season, his system offsets the lack of current season data by incorporating data from the last few seasons, with the relative value of the incorporated data waning as the current season progresses. His current top teams:

North Carolina
Arkansas
Duke
Stanford
Michigan State
Auburn
Mississippi State
Utah State
Notre Dame
Florida State
Southern California
Iowa

I see that there is a pretty good match between my Balanced RPI and Massey. FYI, the Women's Soccer Committee has requested authority to use Massey as an additional rating system next year.
Here is my 2024 Report 10 with current actual RPI ratings, ranks, and other information for games played through Sunday, October 6 as well as predicted end-of-season ratings, ranks, and other information and NCAA Tournament at large prospects based on the actual results of games played through October 6 and predicted results of games not yet played. The predicted results are based on teams' current NCAA RPI ratings based on games played through October 6. For the first time this season, the report includes teams' NCAA Tournament #1 through 4 seed prospects in addition to their at large prospects.
Here is my 2024 Report 11 with current actual RPI ratings, ranks, and other information, and teams currently in the candidate pools for NCAA Tournament seeds and at large selections, based on games played through Sunday, October 13. It also has predicted end-of-season ratings, ranks, and other information. And, it has predicted NCAA Tournament #1 through 8 seeds and at large selections. The predictions are based on the actual results of games played through October 13 and predicted results of games not yet played. The predicted results are based on teams' current NCAA RPI ratings from games played through October 13. At the end, the report also shows teams predicted to get at large positions if the Committee were using the Balanced RPI rather than the NCAA RPI and which teams they would bump out.
As I have shown, a significant NCAA RPI problem is that teams' RPI ranks can be very different than the ranks the RPI formula assigns when computing what they contribute to their opponents' RPI ranks. This is because the NCAA computes the RPI formula as follows:

(Winning Percentage + 2 * Average of Opponents' Winning Percentages + Average of Opponents' Opponents' Winning Percentages) / 4

In the formula, the OWP and OOWP numbers together are the team's Strength of Schedule. So although the team's RPI rank is based on the above formula, when it is an opponent of another team, its contribution to the opponent's Strength of Schedule is:

(2 * Winning Percentage + Average of Opponents' Winning Percentages) / 4

In other words, if I play a team, I am playing a team with a rank based on the first formula above (the RPI formula), but within my own RPI, I get credit for playing a team with a rank based on the second formula above (the Strength of Schedule formula). Further, in terms of effective weights of the elements, the two formulas convert to:

My RPI: 50% My Winning Percentage, 40% Average of My Opponents' Winning Percentages, 10% Average of My Opponents' Opponents' Winning Percentages

My Strength of Schedule Contribution: 80% My Winning Percentage, 20% Average of My Opponents' Winning Percentages

As a result of these differences, my RPI rank can be very different than my rank as a Strength of Schedule contributor to my opponents. I decided to run a check, at this stage of the season, on how these differences affect teams from the different conferences and also from the four geographic regions into which I divide teams based on their scheduling habits.
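The two formulas above can be sketched in a few lines of Python. The two teams below are hypothetical, with numbers I chose so that they have identical RPIs; the point is that the team with the gaudy record against a weaker schedule contributes more to its opponents' Strength of Schedule than the team with the modest record against a tough schedule.

```python
def rpi(wp, owp, oowp):
    """A team's own RPI: (WP + 2*OWP + OOWP) / 4, per the NCAA formula."""
    return (wp + 2 * owp + oowp) / 4

def sos_contribution(wp, owp):
    """What that same team contributes to an opponent's Strength of
    Schedule: (2*WP + OWP) / 4. Its own OOWP drops out entirely, and
    its WP weight doubles."""
    return (2 * wp + owp) / 4

# Hypothetical Team A: modest record (WP .500) against strong opponents.
a_wp, a_owp, a_oowp = 0.500, 0.650, 0.550
# Hypothetical Team B: strong record (WP .700) against weaker opponents.
b_wp, b_owp, b_oowp = 0.700, 0.550, 0.550

# Their RPIs are identical...
assert rpi(a_wp, a_owp, a_oowp) == rpi(b_wp, b_owp, b_oowp) == 0.5875

# ...but as Strength of Schedule contributors they diverge sharply:
assert sos_contribution(a_wp, a_owp) == 0.4125  # Team A
assert sos_contribution(b_wp, b_owp) == 0.4875  # Team B looks much "tougher"
```

Under the contribution formula, playing Team B is credited as a materially harder game than playing Team A, even though the RPI itself rates them as exactly equal. That is the mechanism behind the Canisius/BYU and Louisville/Indiana/Oregon gaps discussed earlier in the thread.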
The first table below shows the average difference, by conference, between teams' opponents' RPI ranks and their opponents' ranks as Strength of Schedule contributors. In the table, a negative number on the right means that opponents' average ranks as Strength of Schedule contributors are poorer than their actual RPI ranks. Effectively, the conference's teams are getting credited with playing teams weaker than the RPI says the teams actually are. Conversely, a positive number means the conference's teams are getting credited with playing teams stronger than the RPI says the teams actually are.

As you can see, this RPI problem can be quite stunning. As shown at the top of the table, the Big 10's teams on average are getting credited with playing opponents that are ~30 positions weaker than their actual RPI strength. Other strong conferences also are being discriminated against by the RPI, but the Big 10 suffers by far the most. Likewise, teams from the West are discriminated against to the tune of getting credited, on average, with playing opponents that are ~10 positions weaker than their actual RPI strength. The RPI formula treats teams from the Middle about right. It discriminates in favor of teams from the North and from the South the most.

This regional discrimination is the result of two factors: (1) Just as for stronger conferences, the RPI Strength of Schedule problem discriminates against stronger regions; the West region's teams have the highest average RPI ratings. (2) Because the Strength of Schedule computation is heavily weighted in favor of playing opponents with high winning percentages, it discriminates against regions with high levels of parity, in which there are fewer teams with high winning percentages. The West has the highest level of parity and the South the lowest.
From an NCAA Tournament perspective, one of the problems this causes is that some teams look good to the Committee because the Strength of Schedule elements of their RPIs are making their overall RPI ranks look better than they should and some teams look poorer to the Committee because the Strength of Schedule elements of their RPIs are making their overall RPI ranks look poorer than they should. As you can see from the current numbers in the tables, these RPI misrepresentations can be significant.
Here is my 2024 Report 12 with current actual RPI ratings, ranks, and other information, and teams currently in the candidate pools for NCAA Tournament seeds and at large selections, based on games played through Sunday, October 20. It also has predicted end-of-season ratings, ranks, and other information. And, it has predicted NCAA Tournament #1 through 8 seeds and at large selections. The predictions are based on the actual results of games played through October 20 and predicted results of games not yet played. The predicted results are based on teams' current NCAA RPI ratings from games played through October 20. At the end, the report also shows new teams predicted to get at large positions if the Committee were using the Balanced RPI rather than the NCAA RPI and which teams they would bump out.
I didn’t want to start another thread for this, and this one seemed the most appropriate since it is a "Ratings and Ranks" thread. The NCAA Division I Women’s Soccer Committee has revealed its top 16 teams through the games of October 16:

1. Duke (11-1-0)
2. North Carolina (13-2-0)
3. Wake Forest (9-2-2)
4. Mississippi St. (12-1-0)
5. Arkansas (11-1-1)
6. Iowa (11-1-3)
7. Stanford (12-2-1)
8. Penn St. (11-3-2)
9. Auburn (12-1-2)
10. Michigan St. (9-1-5)
11. Southern California (11-1-2)
12. UCLA (12-2-2)
13. Ohio St. (10-2-3)
14. TCU (11-2-2)
15. Notre Dame (8-1-3)
16. Pepperdine (9-2-3)

https://www.ncaa.com/news/soccer-wo...sion-i-womens-soccer-committee-reveals-top-16
Not too bad a list as of October 16. Since then, Ohio State lost to Southern California and UCLA, both away, both 1-0. Pepperdine lost to Loyola Marymount. TCU won against UCF and Iowa State. I mention these teams because they are not in my system's Top 16. Instead, my Top 16 includes Florida State (blowout wins over Virginia and Pittsburgh), Vanderbilt (win over Florida), and South Carolina (win over Alabama). I easily can see TCU in the Top 16. I'm thinking Pepperdine has dropped out. It's possible Ohio State also has dropped out, although their two losses really should not detract from their profile.
How about Minnesota's schedule? Tied at USC, lost 3-1 at UCLA, won 3-2 at Ohio State, lost 3-2 at Penn State, and now Iowa on Sunday.
I've been wondering how the absorption of the Pac 12 into, and other expansion of, the now Power 4 conferences, coupled with the 2022 change to no overtimes and the 2024 RPI formula changes will affect how many of the Power 4 teams end up in the historic Top 57 candidate group for NCAA Tournament at large positions. We only will have this year's data to do a comparison with past history, which is not a big enough sample to get an authoritative answer, but the numbers so far suggest the effect will be minimal. Looking at the teams currently in the Power 4, those teams historically (since 2007) have averaged 36.9 teams in the end-of-season Top 57. Based on my end-of-season rank projections for this year, the number this year will be right around 36. This doesn't mean that the changes won't affect the Committee selections. We will have to wait to see if it looks like there will be any difference from the past. But for now, it looks like business as usual.
Following up on the previous post, here are the conferences' historic average and projected 2024 numbers of teams in the Top 57. Reminder: These are based on the teams that currently are in each conference.
Here is my 2024 Report 13 with current actual RPI ratings, ranks, and other information, and teams currently in the candidate pools for NCAA Tournament seeds and at large selections, based on games played through Sunday, October 27. It also has predicted end-of-season ratings, ranks, and other information. And, it has predicted NCAA Tournament #1 through 8 seeds and at large selections. The predictions are based on the actual results of games played through October 27 and predicted results of games not yet played. The predicted results are based on teams' current NCAA RPI ratings from games played through October 27. At the end, the report also shows new teams predicted to get at large positions if the Committee were using the Balanced RPI (Cal, Tennessee, and Colorado) rather than the NCAA RPI and which teams they would bump out (Oklahoma State, Memphis, and Buffalo). It also shows teams that would drop out of the NCAA RPI Top 57 candidate pool if using the Balanced RPI (Fairfield, Liberty, South Florida, Dayton, James Madison, Massachusetts, Columbia, Army, Buffalo, and Texas A&M [TAMU predicted to end up below 0.500]) and teams that would replace them in the pool (Cal, Colorado, Loyola Marymount, Tennessee, Illinois, UC Davis, Baylor, UConn, Kansas, and Utah). You may notice that the three teams likely to get at large spots under the Balanced RPI all are not candidates under the NCAA RPI. In other words, the NCAA RPI appears to be blocking teams that otherwise might get at large spots from getting Committee consideration.