The Ivies being in the Tournament this year has nothing to do with which teams might have been in the Tournament under a better rating system and which teams might have been out. The Ivies would have been in the Tournament under all of the rating systems I am aware of. The seeding, however, might have been different, and they might not have had home field advantage in every first round game. Further, the question is not how good they are. The question is how good they proved themselves to be based on their performance over the course of the regular season. A careful review of who their opponents were and how they did against their better-ranked opponents, even using only the RPI rankings, suggests they were significantly overrated by the RPI. A number of years ago, I wrote an article showing how a conference, by having all of its teams schedule for good non-conference records, could trick the RPI and move from being a normal mid-major to being a top conference. The Ivies, whether by intent or by being in the Northeast and having travel limitations, have matched what I described in that article. This is exactly why basketball stopped using the RPI.
This is an important distinction. Many power rankings provide a prediction of how well one team would do against another rather than a rating system that rewards teams for their performance.
To present the counterpoint, scheduling tough out-of-conference games won’t help if you lose them all. They have to be strong enough to get some results. Also, don’t forget they only play 7 conference games (plus the Ivy tournament now), which leaves plenty of room for OOC games. Other mid-majors play 10-12 conference games, and if their conference is weak, that will drag down their RPI.
I’m going to guess Columbia and Princeton likely get beat, while Brown and Harvard win. Ivy probably had two top 16 caliber teams and 4 in the top 32. Not bad for a conference of 8 teams.
The fact that an organization like the NCAA, which purports to care about fairness and opportunity for all student-athletes, can still support the use of a flawed rating system like the RPI, one that its own revenue-generating sport already debunked years ago, is absolutely insane and should be another nail in the coffin for the NCAA as an organization as we know it. I would be shocked if the ACC and others aren't pounding down the doors after this season. I would hope they at least add it to their list of grievances against the NCAA about the handling of other issues (transfer waivers, eligibility, etc.).
I have just published my ANNUALLY UPDATED EVALUATION AND COMPARISON OF RATING SYSTEMS: NCAA RPI, KP INDEX, AND BALANCED RPI. Although I cannot comment on the KPI as applied to other sports, the data so far suggest that the KPI is an even poorer rating system than the RPI for Division I women's soccer. Hopefully, coaches are getting fed up with the RPI and will start demanding a change to a better system such as the Balanced RPI or Kenneth Massey's system.
I have posted a refined update on the Women's Soccer Committee's NCAA Tournament decision patterns for seeds and at-large selections, based on the Committee's Tournament decisions from 2007 through 2023. The update shows that, overall, the Committee has been quite consistent in its patterns over the years. In addition, it shows that the RPI affects every aspect of the decision process.
Traveling a bit out of my usual lane, I have put together an opinion piece on the 2023 NCAA Tournament first round matchup in which UC Irvine upset overall second seed UCLA. My opinion: UCLA never should have had to play UC Irvine in the first round. Check out the facts I provide and see what you think: THE NCAA'S BIG MISTAKE IN SETTING UP THE 2023 NCAA TOURNAMENT BRACKET.
Nice work. I agree, but I have a feeling we may be yelling at clouds. As a non-revenue-generating sport, cost will always be a factor that works against the West, where NCAA-qualifying opponents are more spread out geographically. In addition to UCLA having a tougher first-round game than they should have, the #2 seed in the group, Stanford, which was in the running for a #1 seed, had a first-round game against Pepperdine. The other opponents for the #2 seeds were Florida Gulf Coast, Central Connecticut State, and Grambling.
Agree - but any low seed that makes a deep tournament run is going to look in hindsight like a mistake. For good or bad, strength of conference plays into the seeding. Based on past perception ("ACC bias") and eventual results, maybe a couple more ACC teams should have gotten bids. Look at the Big Ten vs. ACC results last tourney. It would have been interesting to see how deep other ACC teams that did not get bids could have gone, even if they received a low seed.
The inadequate Pittsburgh seed was an error, in my opinion, due to a poor Committee judgment. The nature of the error, however, was completely different from the error of having UCLA play UC Irvine in the first round. The source of the Pittsburgh error was the Committee members and their evaluation of the team profiles. The source of the UCLA-UC Irvine error was the defective nature of the RPI coupled with the requirement that the Committee use it, the structure of the Committee, and the NCAA travel cost limitation policies. In other words, the Pittsburgh error was simply an avoidable Committee goof. The UCLA-UC Irvine error was almost inevitable given NCAA policies.

Or, to look at it differently: The Pittsburgh error could have happened in basketball (although it probably wouldn't have, due to the greater demands and expectations placed on Basketball Committee members). The UCLA-UC Irvine fiasco never would have happened in basketball.

I am confident the NCAA knows well that its policies for non-revenue sports can virtually force situations like the UCLA-UC Irvine first round game. They don't care enough even to make changes they could make without busting their budget, such as ash-canning the RPI for a better rating system. I place some of the responsibility for this on NCAA staff, who have a lot on their plates and prioritize basketball so highly that they tend to be negative when it comes to improvements for non-revenue sports. I also place responsibility on the college coaches branch of the United Soccer Coaches, whose voices being raised to a very high level almost certainly is a prerequisite for getting any change.
You do amazing work and you aren't wrong about the RPI. That said, most coaches (especially those outside the Power whatever) have far more important issues to deal with these days. Many Power whatever coaches themselves are probably too distracted right now by NIL and transfers to spend capital on the RPI. Add to that, any system will draw complaints and arguments that someone is cheesing it (see below). There is no perfect system when you have over 300 teams not playing similar schedules and at VASTLY different levels. There is only somewhat better, and that is not worth most coaches' time. https://www.espn.com/mens-college-b...elt-clemson-tigers-brad-brownell-net-rankings
I have just posted an article showing the Women's Soccer Committee's NCAA Tournament batting average when it comes to seeding teams and assigning home field advantage to unseeded teams, 2011 through 2023. Some of the takeaways: The Big 10 outperforms the Committee's expectations. The Big 12 and SEC underperform the Committee's expectations. The Big West and West Coast conferences outperform the Committee's expectations, apparently at the expense of the Pac 12. The ACC performs about as the Committee expects. For details and more about how the conferences and regions perform in relation to Committee expectations, the article is here: A LOOK AT THE COMMITTEE'S NCAA TOURNAMENT BRACKET DECISIONS BATTING AVERAGE, 2011 - 2023
I have shown that the current NCAA RPI is a defective rating system. But does it really matter? Would it make a difference if the NCAA instead were to use the Balanced RPI, which does not have the RPI's defects? This is the topic of my most recent article: NCAA TOURNAMENT: LIKELY AT LARGE CANDIDATE POOL AND SELECTION CHANGES IF THE COMMITTEE HAD USED THE BALANCED RPI RATHER THAN THE NCAA RPI. Teaser: Yes, it matters. It matters especially to the Big 10, which is hurt by the NCAA RPI to the tune of 1 lost at-large position per year, on average. And it matters to conferences in the West and also to the ACC, which are losing at-large positions.
Random question: I've noticed lately that some teams (maybe because of pandemic-era travel restrictions, maybe also because of bloated super conferences that don't allow traditional rivals to play every year) are scheduling conference teams as non-conference games. The main example I'm thinking of is UNC and Duke, which recently have played twice in the regular season with only one match counting towards conference play. On paper it seems like a win-win for both teams, which are usually both high in the RPI. But is it? And could it actually be hurting the conference overall in the RPI ratings?
For those two teams, it is fine from an RPI perspective (unless Duke has a year like 2023, when they had a winning percentage below 0.500, which would not have helped North Carolina). For the rest of the teams in the conference, it is a slight RPI negative. Assume you are another ACC team with both Duke and North Carolina on your schedule. Whichever wins the game between the two of them has chalked up a win that benefits its winning percentage. Whichever loses has chalked up a loss. Or, if the game is a tie, each has chalked up a tie. Since you play both of them, you get a win-loss or a tie-tie. Element 2 of the RPI is the average of your Opponents' Winning Percentages, so since you have played both of them, your Element 2 gets pulled towards 0.500 (win-loss or tie-tie). ACC teams have Opponents' Winning Percentages above 0.500, so the extra game has pulled your OWP in the wrong direction. The same happens again in RPI Element 3, which is Opponents' Opponents' Winning Percentages. This happening only once is not a big deal for a strong conference, but if a bunch of teams did it, it could become a problem. And when it is a problem, it most likely will be for teams that are borderline for NCAA Tournament at-large selections. It could drop them down an RPI rank position or two, thus diminishing their profiles as competitors for the last unfilled at-large spots.
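The arithmetic behind that pull toward 0.500 can be sketched in a few lines. This is a minimal illustration with hypothetical records (the team names and win-loss numbers are invented for the example, not real results): the head-to-head game adds exactly one win and one loss between the two rivals no matter who wins, so for any pair of above-.500 teams it lowers their combined average winning percentage, and with it the Element 2 of every team that plays both of them.

```python
def wp(wins, losses, ties=0):
    """Winning percentage, counting a tie as half a win (a common
    convention; the exact NCAA bonus/penalty adjustments are omitted)."""
    games = wins + losses + ties
    return (wins + 0.5 * ties) / games

# Hypothetical records for the two rivals, EXCLUDING their extra
# non-conference head-to-head game.
duke_w, duke_l = 12, 4
unc_w, unc_l = 12, 4

# Pair's average WP if the extra game were never played:
avg_without = (wp(duke_w, duke_l) + wp(unc_w, unc_l)) / 2

# The extra game adds one win to one team and one loss to the other,
# regardless of which of them wins it:
avg_with = (wp(duke_w + 1, duke_l) + wp(unc_w, unc_l + 1)) / 2

# For any pair of above-.500 teams, the average drops, so a third team
# that schedules both sees its Element 2 (average of its opponents'
# winning percentages) pulled toward 0.500.
assert avg_with < avg_without
print(round(avg_without, 4), round(avg_with, 4))
```

With these numbers the pair averages 0.7500 without the extra game and about 0.7353 with it; a tie-tie result moves the average the same direction. The effect is small per game, which matches the point above that it only matters in aggregate and for borderline at-large teams.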
For those with an interest in the details of the RPI and the NCAA Tournament bracket formation process, I have just completed my annual update of the RPI for Division I Women's Soccer website. New this year is an additional method for evaluating how well different rating systems' ratings correlate with actual game results. This new method, like the other one I use, confirms that the NCAA RPI cannot properly rate teams from a conference in relation to teams from other conferences, nor teams from a region in relation to teams from other regions, due to the way the NCAA has constructed the formula. It confirms the KPI likewise has a problem in relation to conferences, but suggests -- unlike the other method I use -- that it may be OK in relation to regions. I say "may be" because there are KPI data only for the years since 2017, which I consider a relatively small sample for this kind of evaluation. The new method also confirms that my Balanced RPI and Massey's ratings do not have these problems.
The RPI and its adjustments just select the teams and seed the bracket. It does not guarantee (or even suggest) what the results of the bracket games should be.
The RPI's problems are in relation to how it rates teams during the regular season. That is what the evaluation systems address.
A random selection of seeds also fulfills your definition. I do appreciate the use of "guarantee". Glad you removed that confusion.
A serious question, since I am interested in how serious a commentator you are on this: Have you actually read all the material at the RPI for Division I Women's Soccer website on the comparison of the NCAA RPI to the Balanced RPI (and Massey)?