Use of the RPI for Division I Women's Soccer

Discussion in 'Women's College' started by cpthomas, Jan 25, 2008.

  1. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    Well, thanks! I think it's been a darn good thread.

    That is an interesting comparison but I think I said I was especially curious to see the "alternative" Albyn Jones rating that uses no "past history" from previous seasons -- to see how much deviation there is from his ratings that we discuss here.

    I'm convinced we need to scrap the RPI.
     
  2. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    You mean no past history at the beginning of the season? or none at the end of the season?

    One of the assumptions of the RPI that causes so much trouble is the assumption that everyone is equal at the start. It's clear that ain't so.
     
  3. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    Nothing from the previous season, I believe, but cpthomas is probably the one here who can best answer that because I think he's talked to Albyn Jones. In other words, using only the results of the current season.

    Naturally, at the start of the season the ratings will be virtually meaningless; but from what I understand, by the end of the season the sample size is enough to give results that are reasonably close to the published Albyn Jones ratings.

    As it is, the published ratings give more weight to recent results than earlier ones so the "carryover" effect from the previous season is becoming minimal by the end of the current season. (Again, if I understand it correctly).

    My feeling is this: if the "no past history" version works reasonably well, then there's no reason for the NCAA not to adopt at least that version and scrap the RPI, since the single strongest objection to the Albyn Jones system is its use of results from the prior season (or seasons, as it's an ongoing, continuous system).
     
  4. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I am attaching, as a pdf file, the RPI (unadjusted) rankings and rating information covering games through September 21, 2008. There still are a few teams missing (not major ones), which will continue to be the case until all teams have played at least two games. That will not happen until Alcorn State plays its second DI game on October 10.

    We now are a little short of mid-way through the games to be played, so the rankings and ratings don't mean a lot in terms of where the teams will end up, but if you've been comparing them from week to week you can see that the "heads of state" (to use a cycling term) are starting to move to the top. A lot of the inter-conference and inter-regional games have been played, with another bunch next week, but we still have most of the intra-conference games to play.

    If you compare this week to last week, you'll see a big jump by Portland, to the top of the ratings. The game results leading to that jump help shed light on how strength-of-schedule plays into the ratings. For games through September 14, Portland's opponents' average winning percentage against teams other than Portland (Element 2 of the RPI) was 0.5929. Over the next week, two things happened: (1) Portland beat Florida on 9/19. With Florida winning over Kansas on 9/21, this added Florida's 5-1-1 record to the computation of Portland's Element 2. (This 5-1-1 record does not include the loss to Portland, since that loss is excluded in computing Florida's winning percentage for purposes of Portland's Element 2.); and (2) Portland's previous opponents this year went 9-0-1 over the course of the week, thus improving each opponent's winning percentage, significantly in some cases. The result was that Portland's Element 2 rose to 0.6830, a large increase that, together with Portland's win, accounted for the jump. This is a pretty good demonstration that it's important not only that a team win its own games but also that its opponents win theirs.

    It's important to note, by the way, that the RPI strength-of-schedule computations include only the records of teams already played, and not of teams on the schedule but not yet played. Because of this, once conference play starts, it helps a team if the other conference teams have done well in their non-conference games. It helps because with each conference game, a team will be adding its opponent's winning percentage to the team's strength of schedule Element 2.
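    The Element 2 mechanics described above -- each opponent's winning percentage computed with its games against the rated team excluded, counting only opponents already played -- can be sketched in code. This is an illustrative sketch with made-up results, not the NCAA's actual implementation; it treats a tie as half a win, which is one common convention:

    ```python
    # Illustrative sketch of RPI Element 2 (opponents' adjusted winning
    # percentage), using hypothetical game data -- not the NCAA's code.

    def winning_pct(record):
        """Winning percentage with a tie counted as half a win."""
        wins, losses, ties = record
        games_played = wins + losses + ties
        return (wins + 0.5 * ties) / games_played if games_played else 0.0

    def element2(team, games):
        """Average of each opponent's winning pct, EXCLUDING games vs. `team`.

        `games` is a list of (home, away, result) tuples, where result is
        "home", "away", or "tie". Each game already played against an
        opponent contributes one entry to the average.
        """
        pcts = []
        for g in games:
            if team not in g[:2]:
                continue
            opp = g[1] if g[0] == team else g[0]
            # Build opp's record from every game NOT involving `team`.
            w = l = t = 0
            for home, away, result in games:
                if team in (home, away) or opp not in (home, away):
                    continue
                if result == "tie":
                    t += 1
                elif (result == "home" and home == opp) or \
                     (result == "away" and away == opp):
                    w += 1
                else:
                    l += 1
            pcts.append(winning_pct((w, l, t)))
        return sum(pcts) / len(pcts) if pcts else 0.0

    # Hypothetical mini-season: Portland's loss column for Florida is
    # excluded, so only Florida's other two results count.
    games = [
        ("Portland", "Florida", "home"),   # Portland beats Florida
        ("Florida", "Kansas", "home"),     # counts toward Portland's Element 2
        ("Kansas", "Florida", "tie"),      # counts toward Portland's Element 2
    ]
    ```

    With this data, Portland's Element 2 is Florida's 1-0-1 record against everyone else, i.e. (1 + 0.5) / 2 = 0.75 -- the Portland loss never enters the calculation, just as in the Florida example above.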
     
  5. Morris20

    Morris20 Member

    Jul 4, 2000
    Upper 90 of nowhere
    Club:
    Washington Freedom
    I don't know about the #1 to 16 thing. In Division III they don't do that. I'm not sure what the regionalization rules are in D1 (my guess is they pretty much seed #1-4 in region, but Florida had to go up to Milwaukee last year and I'm not sure what role seeding played in that decision - I know Portland was the #1 seed but ended up playing in Boulder). I think even in D1 soccer travel costs trump seeding.
     
  6. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    New ruling for #1 seeds-- they stay home.

    No guarantees for anyone else.

    That's why UP is putting such an emphasis on this. They don't want to travel again.
     
  7. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    As I've said previously, I think the evidence is clear that the RPI is not able to accurately compare teams from the different regions of the US. In particular, the RPI, because it is weighted towards assuming that all regions are equal, discriminates against strong regions and in favor of weak regions. The reason this is a problem is that most teams' games are played within their own geographic regions, even for teams that play a relatively large number of inter-regional games. If one region is stronger than another, that means it is harder for a team from the stronger region to achieve a high RPI because on average it is playing tougher competition. This problem is not corrected by the "strength of schedule" elements of the RPI because the team's intra-regional opponents (i.e., most of the team's opponents) have the same problem the team does: they are playing, on average, tougher opponents.

    Because of this inherent problem with the RPI, I believe the Division I Women's Soccer Committee needs to find a way to figure out the relative strengths of the different regions and to keep the differences in strength in mind as it makes decisions on at large selections and seeding, at least in those situations in which the decisions are not clear.

    Since the criteria for at large selections and seeds specifically disallow consideration of other rating systems and polls, the only place the Committee can go is to the RPI. With that in mind, the best way to compare regions' strength is to delete all intra-regional games from the database and then compute teams' RPIs based only on inter-regional games. That gives a relatively small database from which each team's RPI is computed, which makes the team-by-team computation not very accurate, but it allows the computation of an average team RPI by region that uses a greater amount of data. There are a couple of limitations to this approach. First, there are some teams that play no inter-regional games. Second, after deleting all intra-regional games, there are some teams that have played some inter-regional games but for which it is impossible to compute an RPI due to the limited data set.

    I've run the computations using only inter-regional games. There are 24 teams that have played no inter-regional games, so they cannot be included in the computations. Of the remaining 296 teams, it is possible, at this point in the season, to compute RPIs based only on inter-regional games for half of them. After next week, when there will be a significant number of additional inter-regional games, I'll run the computations again. I'll continue the process through the season, but after next week most teams will be involved in intra-conference games, with the only remaining inter-regional games being those in the relatively limited number of conferences that span regions. So, after next week and the week after, there is likely to be little change in the regions' average team RPIs based on inter-regional games.

    With the above limitations in mind, here are the regions' average team RPIs for those teams for which it is possible to compute RPIs based only on inter-regional games:

    Central .4982
    Great Lakes .5049
    Mid-Atlantic .4923
    Northeast .4797
    Southeast .4940
    West .5554
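    The procedure behind these numbers -- drop every intra-regional game, compute ratings from what's left, then average by region -- can be sketched as follows. The teams, region assignments, and rating values here are hypothetical placeholders standing in for the inter-regional-only RPIs:

    ```python
    # Sketch of the region-comparison idea: filter out intra-regional games,
    # then average per-team ratings by region. All data here is made up.
    from collections import defaultdict

    region = {"Portland": "West", "UCLA": "West",
              "North Carolina": "Southeast", "Florida": "Southeast"}

    def interregional_only(games):
        """Keep only games whose two teams are in different regions."""
        return [(a, b) for a, b in games if region[a] != region[b]]

    def average_by_region(ratings):
        """Average per-team ratings (e.g. inter-regional-only RPIs) by region."""
        by_region = defaultdict(list)
        for team, r in ratings.items():
            by_region[region[team]].append(r)
        return {reg: sum(v) / len(v) for reg, v in by_region.items()}

    games = [("Portland", "UCLA"),          # intra-regional: dropped
             ("Portland", "Florida"),       # inter-regional: kept
             ("North Carolina", "UCLA")]    # inter-regional: kept
    ratings = {"Portland": 0.62, "UCLA": 0.55,
               "North Carolina": 0.60, "Florida": 0.51}
    ```

    The real computation would, of course, re-run the full RPI formula over the filtered game set before averaging; teams left without enough inter-regional data simply drop out, as described above.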

    These numbers are somewhat comparable to the RPI numbers for the entire 2007 season and to the regions' average team ratings by both Albyn Jones' SoccerRatings and Massey for the 2007 season. The West region's apparent greater strength, in particular, is consistent with all three rating systems' results for 2007.

    Although it is too early to know if this pattern will be the same at the end of the current season, if it stays the same, the difference in strength between the West region and the other regions is very significant.
     
  8. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    As a matter of interest, I ran the win-loss-tie totals for out-of-region games for the six regions. I also computed the average number of out-of-region games played by each region's teams. Here are the totals:

    Central: 78 wins - 77 losses - 15 ties (2.98 out-of-region games per team)
    Great Lakes: 91-95-16 (3.26)
    Mid Atlantic: 66-81-24 (3.56)
    Northeast: 54-66-18 (3.37)
    Southeast: 90-81-21 (3.15)
    West: 77-56-22 (3.04)
     
  9. Morris20

    Morris20 Member

    Jul 4, 2000
    Upper 90 of nowhere
    Club:
    Washington Freedom
    So you could argue that the reason for the west's superior win%/RPI vs. other regions is that the mediocre teams in the west simply don't play out of region (or if they do, they play against weak competition from other regions). Meanwhile, east coast teams ALL play out of region since the regions are intermixed (i.e. William & Mary is not in the same region as Virginia).
     
  10. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Makes me think that in the next RPI report, I should include the teams' regions, so I'll try to remember to do that. Actually, William and Mary and Virginia both are in the Mid Atlantic region, according to the NCAA. Similarly, Boston College is in the Northeast region, although the ACC primarily is in the Southeast region. In other words, the NCAA's setup of regions for women's DI soccer splits some conferences' teams among different regions. There was a proposal this year for women's DI soccer to go to eight regions, which apparently would have allowed all teams within a conference to be in the same region. But that proposal was not adopted, at least not for the current season. I did notice, however, that the NCAA this year moved Penn State to the Great Lakes region. The regional numbers I have been providing are based on the six regions with teams as identified by the NCAA as belonging to those regions. (The regional advisory committees match up with the six regions.)

    I assume the reason there are more out-of-region matches on the East Coast is because of geographic factors: the travel distances for inter-regional games are smaller so that travel expenses are less.

    You are correct that one could try to argue that the reasons for the West region's higher average RPI have to do with which teams are playing out-of-region games and with which out-of-region teams the West region teams are playing. Your description of what the arguments might be is what someone might try to argue. As I think about what the specific matchups have been, I don't think they would support that argument, but having that kind of discussion gets pretty subjective.

    What I think would not fly, however, is someone such as the NCAA arguing that the RPI is good for comparing individual teams but not good for comparing regions. If one is willing to treat the RPI as having enough "correspondence" among regions to allow it to compare a team from Region A to a team from Region B, then there should be enough correspondence to allow it to compare Region A as a whole to Region B as a whole. In fact, I think one might expect the "as a whole" comparisons to be more accurate.

    So, my point to the NCAA has been that if it is going to use the RPI, and given that it is demonstrable that the RPI has a big problem comparing teams from one region to teams from another region due to the limited number of games overall and of inter-regional games in particular, then the NCAA needs also to use the RPI to identify the relative strengths of regions. Differences in relative strengths of regions then can become a consideration in the at large and seeding decisions. I've limited what I've suggested to saying that the differences should be a consideration only when the decisions otherwise are doubtful.

    So, although someone can try to make the argument you posed, I don't think the NCAA can make it.
     
  11. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Here are some thoughts to maybe generate discussion about why it might be legitimate for the Women's Soccer Committee to rely heavily on a numeric rating system in making its at large selection and seeding decisions. Morris20, this is something you've discussed previously. (This is a different discussion, I believe, than a discussion about what the rating system ought to be.)

    The first issue the NCAA has to face is the problem of having its process seem or actually be based on personal biases and political considerations. The Women's Soccer Committee faces a very difficult basic problem: When it gets to looking at "bubble" teams in terms of at large selection and seeding, there are relatively few head-to-head games among those teams. In fact, there even are not that many instances in which the Committee can compare those teams' results against common opponents. And, if there are results against common opponents, the results often are inconclusive. In other words, the Committee has little tangible game data based on which it can make the hard decisions it must make. And, as Morris20 has pointed out, the Committee at the end of the season has only hours within which to make its decisions. It has advance preparation time, but the last games of the season, which often are critical, aren't completed until sometime Sunday afternoon before bracket Monday.

    So, what's the Committee to do? One option would be to rely on the Committee members' own personal opinions. Another would be to rely on polls. But both those options would raise serious issues of bias, lobbying, and the exertion of political power.

    The only alternative, it seems to me, is to use a numerical rating system. This brings the second issue the NCAA must face: Whatever the system, teams do not play enough games and do not play enough games across regions for any numerical rating system to be accurate enough, from a statistician's perspective, to tell the Committee what decisions it should make about "bubble" teams. In fact, from a statistical perspective, any numerical rating system would require the Committee to consider far more "bubble" teams both for at large selections and for seeding than is practical. And, even if the Committee could consider that larger number of teams, it would have no objective basis for choosing from among them.

    So, faced with this dilemma, what is the Committee to do? The NCAA has opted for using a numerical rating system and treating it as accurate, even though from a statistical perspective it is not accurate. Essentially, it has opted for this as the lesser of the "evils" available. Thus it uses the numerical rating system as though it correctly rates and ranks teams, subject only to variation where head to head results and results against common opponents are inconsistent with the ratings/rankings. (Or where, if there still is not clear direction after considering these factors, results against teams already selected for the bracket and results over the last eight games are inconsistent with the ratings/rankings.)

    After mulling this over for a couple of seasons, I've come to the conclusion that I agree with the NCAA's approach on this issue. Setting aside the question of which numerical rating system the Committee should use, I've concluded that the NCAA's approach is about as good an approach as there can be. I think it's much preferable to the alternative approach that would allow personal bias, lobbying, and political leverage to come into the system. Yes, those factors may come into selecting what the numerical rating system should be, but at least once the basic policy decision about the rating system is made, the rest is a fairly mechanical process that leaves minimal room for those factors to come into play.

    I would like to see the NCAA be more explicit about its knowing that it treats its numerical rating system as being far more accurate than it really is and about its doing this consciously because it is the least "evil" of the possible processes available to it. One of my criticisms of the NCAA is that it seems to want to be hiding this. I'm guessing that the NCAA tries to hide this because it believes people won't understand the legitimacy of what it is doing and thus its decision-making will lose credibility. To me, that seems like an unworthy position for the representative of academic institutions to be taking. I'd rather have the NCAA lay the warts in the process out there, explain why they're necessary, and trust that over time people will learn and agree. But that said, I find myself agreeing with what the NCAA is doing, whether it has made its rationale explicit or not.

    As said above, I think this is a different question than exactly what numerical rating system the NCAA should use.

    What do others of you think?
     
  12. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    I wouldn't have much problem with the NCAA using a numerical rating system, if they actually used it. I've seen the UP experience with the NCAA over the years, and it seems like there is always some reason UP doesn't get the same considerations other teams get.

    Before you start screaming "whiner," think about this. UP has been the only #1 seed in NCAA history to have to go on the road because of the travel restrictions the NCAA imposes. It has often been the only seeded team that had to go on the road the first two rounds, when other seeded teams get not only the easy seeds, but the comfort of playing those games at home.

    The 350 mile rule is a cancer on the concept of fairness in the tournament. Eastern schools can almost always get favorable seeds at home. There are literally dozens of schools that can get into a tournament under those guidelines. The main issue the NCAA had in the East was the same conference rule, in that those schools were so close that they often had to play each other in early rounds in the tournament. The NCAA has accommodated that problem with recent rule revisions. Now schools from the same conference can't play each other in early rounds.

    But a school like UP is cursed by the current policies. Oregon is a state that is 300x400 miles in extent. Its closest neighbor, Washington, is similarly proportioned. There are only 4 D1 schools in all of Oregon (including Portland), and 4 that I can think of in Washington, so if UP wants to host, one of the other 7 schools in the region needs to also get into the tournament. That hasn't happened in several years. The 4 Pac-10 teams in the region have been pitiful imitations of competitive teams, and the other schools are programs that perhaps aren't fully funded. None of these schools has been able to offer a hosting partner. As a result, UP has had to travel even if it was a #1 seed, because there are no other schools inside the 350 mile radius.

    I find it curious that the schools that benefited were well-connected and politically powerful BCS schools, while UP has suffered the consequences of the current rules.

    When I see equity, and when I see rules that treat all teams the same regardless of geography, I might be a little more disposed to a numerical system.

    Right now it's a numerical system only when it suits the powerful, so I guess I'm in favor of almost any system but the current one.
     
  13. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    We could have a whole additional discussion on the NCAA's travel policies, which are a particular problem for seeded teams from the Pacific Northwest but also are problems for other West region teams -- USC in rounds 1/2 last year and Santa Clara in rounds 1/2 a couple of years ago, as well as the multiple times Portland has had to travel. I think there also was a Southeast region team that had to travel last year.

    BUT, let's try to hold this thread to discussion about the RPI (and potential competitor numerical rating systems) and how they relate to DI women's soccer.

    I do understand Worshipper's comment to be that he would be more willing to consider holding to a numerical rating system if the NCAA's travel rule were fair. I just prefer not to have a long discussion on this thread of the travel rule itself. Maybe pretty soon we could lay out the complete travel rule on a new thread and discuss it. To me, it definitely is a topic that relates to fairness (i.e., the unfairness of the current system) and that deserves serious discussion.
     
  14. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    To tie up a loose end, someone expressed an interest in seeing how Jones' SoccerRatings with the incorporation of some past history deleted compared to his ratings with some past history included. I indicated I might still have the two sets of ratings from 2006. I checked my files and, unfortunately, I don't. Sorry. Moral: Never throw away anything.
     
  15. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    For those who may be anticipating a new RPI report of games through Sunday, September 28, I probably won't be able to post the report until Tuesday, late afternoon or early evening.
     
  16. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I'm attaching, as a pdf file, a new RPI report covering games through September 28, 2008. (For those who read the previous post, I've managed to generate the report while on the road.)
     
  17. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    Thanks for the dedication.

    It looks like at 10 games teams are settling out with almost none of the traditionally weaker programs in the top 50.

    It's very instructive that it has taken this many games for that to happen.
     
  18. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    For those interested in comparisons of the six geographic regions, here are the average RPIs by region, taking into consideration only inter-regional games. Also included are the number of inter-regional games each region played, the number of teams in each region, and the inter-regional games per team for each region. There are 17 teams that played no inter-regional games so that their records made no contribution to these numbers. In addition, a significant portion of the remaining 303 teams have not played enough inter-regional games to allow the system to compute RPIs for them. The result is that it is possible to compute RPIs only for 175 teams, so those are the teams whose "inter-regional RPIs" have contributed to these numbers.

    The numbers, in order, are average RPI, number of games, number of teams, and average games per team:

    Central 0.5049 189 57 3.32

    Great Lakes 0.5090 222 62 3.58

    Mid Atlantic 0.5057 213 48 4.44

    Northeast 0.5007 166 41 4.05

    Southeast 0.4985 226 61 3.70

    West 0.5462 162 51 3.18
     
  19. Morris20

    Morris20 Member

    Jul 4, 2000
    Upper 90 of nowhere
    Club:
    Washington Freedom
    It would be instructive (not that I'm going to do it) to see how the "power conferences" (ACC, SEC, Big East, Big 10, Big XII, Pac 10, C-USA & WCC) inter-conference RPIs look. I think looking at it regionally is deceiving because you're including teams from the NEC, MAAC, SWAAC, etc. in the eastern regions, where there's really no comparable conference in the west. And it's not like the SWAAC is playing a lot of matches against ACC competition any more than they are playing against the Pac-10.
     
  20. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Regarding which teams are and are not contributing to the inter-regional data, it may not be as deceiving as one might think. This is because of which teams tend to play inter-regional games. Due to the RPI formula, as I've stated, it is not possible to compute "inter-regional only" RPIs for a bunch of teams because either those teams or their opponents or opponents' opponents do not play enough inter-regional games for the system to generate an RPI. Using the SWAAC as an example, at this point the system can generate an "inter-regional only" RPI for only 1 of the conference's 10 teams, so only one of its teams is represented in the regional RPI data. On the other hand, 11 out of the 12 ACC teams are represented. Generally, although there are exceptions, the power conferences tend to have higher representation and the weaker conferences tend to have lower representation. This makes sense when you think about which conferences' teams are likely to play out-of-region games.

    Here are the conferences and the number of their teams represented in the "inter-regional only" RPI data:

    ACC 11/12 (missing Miami)
    America East 5/9
    Atlantic Sun 6/11
    Atlantic Ten 6/14
    Big East 10/16 (missing Marquette, DePaul, Pittsburgh, West Virginia, St Johns, Seton Hall)
    Big Sky 5/8
    Big South 3/9
    Big Ten 4/11 (it appears a good number of their teams are playing some opponents who at least so far play strictly intra-regionally)
    Big Twelve 8/11 (missing Kansas, Missouri, Iowa State)
    Big West 4/9
    Colonial 8/12 (missing William & Mary, Towson, Delaware, Northeastern)
    CUSA 8/12 (missing Southern Mississippi, Colorado College, UTEP, Marshall)
    Horizon 6/9
    Independent 1/7
    Ivy 7/8 (missing Princeton)
    Metro Atlantic 4/10
    Mid American 12/12
    Missouri Valley 4/7
    Mtn West 5/8
    Northeast 6/11
    Ohio Valley 3/9
    Pac Ten 7/10 (missing Oregon, Cal, Washington)
    Patriot 5/8
    SEC 8/12 (missing LSU, Mississippi State, Tennessee, Georgia)
    Southern 10/12
    Southland 0/10
    Southwestern 1/10
    Summit 2/9
    Sun Belt 5/12
    United 2/6
    WAC 4/8
    WCC 5/8 (missing St Mary's, San Francisco, Pepperdine)

    Also, your observation about the SWAAC and the ACC is correct. However, all the SWAAC teams are in the Central region, so even if we had enough data for them, their RPIs would not affect the RPI numbers for the Southeast region. In addition, SWAAC teams have played, among others, Auburn (SEC), Mississippi State (SEC), Kennesaw State (Atlantic Sun), Tulsa (CUSA), UTSA (Southland but with a number of Big 12 games), SMU (CUSA but with games against Oklahoma State, Mississippi, Kansas, Notre Dame, USC, and Colorado College), Texas Tech (Big 12), Baylor (Big 12), and Alabama (SEC). These other-conference teams' games against SWAAC opponents go not only into Element 2 of these teams' RPIs, but also into Element 3 of all these teams' opponents' RPIs. This actually illustrates the problem for strong regions that don't have conferences as weak as a conference such as SWAAC: the teams at the bottom of the regional food chain that are contributing up the chain to the RPIs of teams higher in the regional food chain are stronger than in regions with conferences such as SWAAC, thus making it harder for the teams above them to achieve good records. The NCAA's counter-argument has been that the teams in the stronger region are able to beef up their RPIs through the effects of inter-regional games. However, the hypotheticals I ran as discussed earlier in this thread and in the paper at the beginning of the thread demonstrate quite clearly that there are not enough inter-regional games to do what the NCAA says the RPI does.

    Just to add some more grist to the mill, here are the win-loss-tie records by region through September 28:

    Central 81-91-17
    Great Lakes 98-100-24
    Mid Atlantic 82-100-31
    Northeast 64-79-23
    Southeast 107-90-29
    West 84-56-22

    Just as the RPI -- and any other statistical rating system -- has trouble rating teams against each other in a national system, it also will have trouble comparing regions. The more data in the system, the better, but even a full season's data leave any system with a large standard error.

    My theory on the regional RPIs only is that they give some idea of regions' strengths and that since the NCAA is committed to using the RPI, it also should use the inter-regional comparisons, with all the caveats that go with it. Again, my view is that the NCAA at least should look to these comparisons when it has teams "on the bubble" for at large selection and seeds and when the other criteria do not indicate a clear choice. In that circumstance, it seems fair to make decisions in favor of teams from stronger regions as indicated by the RPI since it appears more difficult for those teams to generate higher RPIs.
     
  21. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I agree. So, I've run the RPI system after deleting all intra-conference games. What this gives is an RPI for each team based solely on out-of-conference games (which probably provides something of a basis for guessing conference champs). I've then averaged the RPIs within each conference. This gives almost as good a RPI-based ranking of conferences as possible, although there are a few non-conference games remaining this coming weekend, which will allow an even better ranking.

    Here are the results, showing the team, its out-of-conference RPI, its conference, and the conference's average RPI:

    North Carolina U 0.6803 ACC
    Duke 0.6622 ACC
    Boston College 0.6611 ACC
    Wake Forest 0.6349 ACC
    Florida State 0.6332 ACC
    Virginia U 0.6135 ACC
    Miami FL 0.5949 ACC
    Virginia Tech 0.5941 ACC
    NC State 0.5855 ACC
    Maryland U 0.5010 ACC
    Clemson 0.4980 ACC
    0.6053 ACC Average
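    (Each conference average here is just the arithmetic mean of its teams' out-of-conference RPIs; as a quick sanity check, the ACC figure can be reproduced from the eleven values listed above:

    ```python
    # Mean of the ACC out-of-conference RPIs listed above.
    acc = [0.6803, 0.6622, 0.6611, 0.6349, 0.6332, 0.6135,
           0.5949, 0.5941, 0.5855, 0.5010, 0.4980]
    acc_average = round(sum(acc) / len(acc), 4)
    print(acc_average)  # 0.6053, matching the ACC Average line
    ```

    The same mean-of-listed-values check works for every conference that follows.)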

    Boston U 0.6038 America East
    Stony Brook 0.4994 America East
    Hartford 0.4824 America East
    Binghamton 0.4473 America East
    Maine U 0.4439 America East
    UMBC 0.4292 America East
    Vermont U 0.4194 America East
    New Hampshire U 0.4167 America East
    Albany 0.3876 America East
    0.4589 America East Average

    Belmont 0.5831 Atlantic Sun
    Kennesaw State 0.5717 Atlantic Sun
    East Tennessee State 0.5048 Atlantic Sun
    Campbell 0.4932 Atlantic Sun
    N Florida 0.4877 Atlantic Sun
    Florida Gulf Coast 0.4515 Atlantic Sun
    USC Upstate 0.4442 Atlantic Sun
    Stetson 0.4439 Atlantic Sun
    Mercer 0.4187 Atlantic Sun
    Lipscomb 0.2855 Atlantic Sun
    0.4685 Atlantic Sun Average

    Saint Louis 0.6350 Atlantic Ten
    Richmond 0.6187 Atlantic Ten
    Dayton 0.6033 Atlantic Ten
    St Josephs 0.5556 Atlantic Ten
    Charlotte 0.5318 Atlantic Ten
    St Bonaventure 0.5246 Atlantic Ten
    La Salle 0.5049 Atlantic Ten
    Fordham 0.5012 Atlantic Ten
    Rhode Island U 0.4985 Atlantic Ten
    George Washington 0.4984 Atlantic Ten
    Xavier 0.4561 Atlantic Ten
    Massachusetts U 0.4465 Atlantic Ten
    Duquesne 0.4165 Atlantic Ten
    Temple 0.4131 Atlantic Ten
    0.5146 Atlantic Ten Average

    Notre Dame 0.7135 Big East
    Rutgers 0.6371 Big East
    St Johns 0.6362 Big East
    West Virginia U 0.6068 Big East
    Seton Hall 0.5893 Big East
    Villanova 0.5477 Big East
    Marquette 0.5425 Big East
    Providence 0.5389 Big East
    Georgetown 0.5379 Big East
    Louisville 0.5335 Big East
    Connecticut U 0.5274 Big East
    South Florida 0.5200 Big East
    Pittsburgh 0.5138 Big East
    Syracuse 0.5082 Big East
    Cincinnati 0.4726 Big East
    De Paul 0.4241 Big East
    0.5531 Big East Average

    Northern Arizona 0.5465 Big Sky
    Montana U 0.4888 Big Sky
    Weber State 0.4827 Big Sky
    Northern Colorado 0.4415 Big Sky
    Portland State 0.4400 Big Sky
    Idaho State 0.4388 Big Sky
    Sacramento State 0.4328 Big Sky
    Eastern Washington 0.4185 Big Sky
    0.4612 Big Sky Average

    Radford 0.5642 Big South
    Coastal Carolina 0.5305 Big South
    High Point 0.5024 Big South
    Winthrop 0.4781 Big South
    Liberty 0.4622 Big South
    Charleston Southern 0.4512 Big South
    UNC Asheville 0.4195 Big South
    Presbyterian 0.4025 Big South
    Gardner Webb 0.4023 Big South
    VMI 0.3966 Big South
    0.4609 Big South Average

    Illinois U 0.6605 Big Ten
    Penn State 0.6115 Big Ten
    Northwestern U 0.5904 Big Ten
    Minnesota U 0.5829 Big Ten
    Purdue 0.5790 Big Ten
    Michigan State 0.5723 Big Ten
    Ohio State 0.5655 Big Ten
    Wisconsin U 0.5381 Big Ten
    Iowa U 0.5282 Big Ten
    Indiana U 0.5069 Big Ten
    Michigan U 0.4981 Big Ten
    0.5667 Big Ten Average

    Texas A&M 0.7154 Big Twelve
    Texas U 0.6604 Big Twelve
    Colorado U 0.6415 Big Twelve
    Oklahoma State 0.6316 Big Twelve
    Missouri U 0.6237 Big Twelve
    Kansas U 0.6106 Big Twelve
    Texas Tech 0.5253 Big Twelve
    Baylor 0.4889 Big Twelve
    Oklahoma U 0.4741 Big Twelve
    Nebraska U 0.4681 Big Twelve
    Iowa State 0.4516 Big Twelve
    0.5719 Big Twelve Average

    Long Beach State 0.6079 Big West
    CS Northridge 0.5698 Big West
    UC Santa Barbara 0.5642 Big West
    UC Irvine 0.5462 Big West
    CS Fullerton 0.5436 Big West
    U of Pacific 0.5265 Big West
    UC Riverside 0.5180 Big West
    UC Davis 0.5164 Big West
    Cal Poly 0.4950 Big West
    0.5431 Big West Average

    Old Dominion 0.6623 Colonial
    James Madison 0.6289 Colonial
    William and Mary 0.6239 Colonial
    Virginia Commonwealth 0.5658 Colonial
    UNC Wilmington 0.5444 Colonial
    Georgia State 0.5358 Colonial
    Hofstra 0.5348 Colonial
    Delaware U 0.4900 Colonial
    Northeastern 0.4776 Colonial
    George Mason 0.4665 Colonial
    Towson 0.4509 Colonial
    Drexel 0.3820 Colonial
    0.5302 Colonial Average

    UCF 0.6522 Conference USA
    Colorado College 0.6198 Conference USA
    Rice 0.5939 Conference USA
    Memphis 0.5816 Conference USA
    East Carolina 0.5173 Conference USA
    SMU 0.5144 Conference USA
    Marshall 0.4836 Conference USA
    Tulsa 0.4820 Conference USA
    UTEP 0.4794 Conference USA
    Houston 0.4752 Conference USA
    Southern Mississippi 0.4580 Conference USA
    UAB 0.4509 Conference USA
    0.5257 Conference USA Average

    Wisconsin Milwaukee 0.6154 Horizon
    Valparaiso 0.5348 Horizon
    Detroit 0.4850 Horizon
    Butler 0.4846 Horizon
    Loyola Chicago 0.4833 Horizon
    Cleveland State 0.4580 Horizon
    Wright State 0.4580 Horizon
    UW Green Bay 0.4386 Horizon
    Youngstown State 0.3803 Horizon
    0.4820 Horizon Average

    Seattle 0.4708 Independent
    Cal State Bakersfield 0.4523 Independent
    Southern Illinois 0.4431 Independent
    Francis Marion 0.4129 Independent
    Houston Baptist 0.3993 Independent
    North Dakota U 0.3847 Independent
    South Dakota U 0.3585 Independent
    0.4174 Independent Average

    Princeton 0.6017 Ivy
    Dartmouth 0.5899 Ivy
    Brown 0.5890 Ivy
    Harvard 0.5803 Ivy
    Columbia 0.5522 Ivy
    Yale 0.5498 Ivy
    Pennsylvania U 0.5424 Ivy
    Cornell 0.4123 Ivy
    0.5522 Ivy Average

    Fairfield 0.5906 Metro Atlantic
    Niagara 0.5310 Metro Atlantic
    Loyola MD 0.5160 Metro Atlantic
    Siena 0.5119 Metro Atlantic
    Canisius 0.5060 Metro Atlantic
    Manhattan 0.4809 Metro Atlantic
    Marist 0.4068 Metro Atlantic
    Rider 0.3714 Metro Atlantic
    Iona 0.3693 Metro Atlantic
    St Peters 0.3453 Metro Atlantic
    0.4629 Metro Atlantic Average

    Akron 0.5635 Mid American
    Toledo 0.5546 Mid American
    Buffalo 0.5254 Mid American
    Northern Illinois 0.5075 Mid American
    Western Michigan 0.4833 Mid American
    Central Michigan 0.4819 Mid American
    Bowling Green 0.4797 Mid American
    Kent 0.4703 Mid American
    Eastern Michigan 0.4701 Mid American
    Ball State 0.4692 Mid American
    Ohio U 0.4221 Mid American
    Miami OH 0.3299 Mid American
    0.4798 Mid American Average

    Missouri State 0.5513 Missouri Valley
    Creighton 0.5413 Missouri Valley
    Evansville 0.5173 Missouri Valley
    Illinois State 0.4994 Missouri Valley
    Northern Iowa 0.3826 Missouri Valley
    Indiana State 0.3232 Missouri Valley
    0.4692 Missouri Valley Average

    BYU 0.5912 Mountain West
    TCU 0.5881 Mountain West
    UNLV 0.5829 Mountain West
    New Mexico U 0.5654 Mountain West
    San Diego State 0.5544 Mountain West
    Utah U 0.5360 Mountain West
    Air Force 0.4512 Mountain West
    Wyoming U 0.4045 Mountain West
    0.5342 Mountain West Average

    Quinnipiac 0.5472 Northeast
    Long Island 0.5379 Northeast
    Monmouth 0.5252 Northeast
    Central Connecticut 0.4726 Northeast
    St Francis 0.4663 Northeast
    Robert Morris 0.4383 Northeast
    Fairleigh Dickinson 0.4309 Northeast
    Bryant 0.4127 Northeast
    Sacred Heart 0.3858 Northeast
    Wagner 0.3812 Northeast
    Mount St Mary 0.3352 Northeast
    0.4485 Northeast Average

    Jacksonville State 0.4814 Ohio Valley
    UT Martin 0.4695 Ohio Valley
    Murray State 0.4573 Ohio Valley
    Eastern Kentucky 0.4410 Ohio Valley
    SE Missouri 0.4352 Ohio Valley
    Tennessee Tech 0.4044 Ohio Valley
    Morehead 0.3656 Ohio Valley
    Eastern Illinois 0.3585 Ohio Valley
    Austin Peay 0.3552 Ohio Valley
    0.4187 Ohio Valley Average

    Stanford 0.7011 Pac Ten
    UCLA 0.6579 Pac Ten
    California U 0.6505 Pac Ten
    USC 0.6482 Pac Ten
    Washington U 0.6275 Pac Ten
    Arizona State 0.6122 Pac Ten
    Oregon U 0.5947 Pac Ten
    Arizona U 0.5882 Pac Ten
    Washington State 0.5719 Pac Ten
    Oregon State 0.4940 Pac Ten
    0.6146 Pac Ten Average

    Lehigh 0.5053 Patriot
    Navy 0.5043 Patriot
    Army 0.4910 Patriot
    Bucknell 0.4876 Patriot
    Colgate 0.4525 Patriot
    American 0.4276 Patriot
    Holy Cross 0.4137 Patriot
    Lafayette 0.3583 Patriot
    0.4550 Patriot Average

    Florida U 0.6738 SEC
    South Carolina U 0.6338 SEC
    Vanderbilt 0.6062 SEC
    LSU 0.5947 SEC
    Georgia U 0.5646 SEC
    Auburn 0.5527 SEC
    Kentucky U 0.5355 SEC
    Alabama U 0.5283 SEC
    Tennessee U 0.5187 SEC
    Arkansas U 0.5172 SEC
    Mississippi U 0.5097 SEC
    Mississippi State 0.4763 SEC
    0.5593 SEC Average

    College of Charleston 0.6096 Southern
    UNC Greensboro 0.5851 Southern
    Davidson 0.5624 Southern
    Elon 0.5610 Southern
    Furman 0.5462 Southern
    Western Carolina 0.4919 Southern
    Georgia Southern 0.4638 Southern
    Appalachian State 0.4565 Southern
    Samford 0.4523 Southern
    Wofford 0.4426 Southern
    UT Chattanooga 0.3965 Southern
    The Citadel 0.3489 Southern
    0.4931 Southern Average

    Northwestern State 0.5022 Southland
    McNeese State 0.5008 Southland
    UTSA 0.4748 Southland
    Texas State 0.4657 Southland
    SE Louisiana 0.4622 Southland
    Sam Houston State 0.4306 Southland
    Stephen F Austin 0.4233 Southland
    Lamar 0.4013 Southland
    Nicholls State 0.3440 Southland
    Central Arkansas 0.3427 Southland
    0.4348 Southland Average

    Alabama A&M 0.4073 Southwestern
    Alabama State 0.3918 Southwestern
    Southern U 0.3691 Southwestern
    Grambling 0.3530 Southwestern
    Mississippi Valley 0.3454 Southwestern
    Arkansas Pine Bluff 0.3237 Southwestern
    Prairie View A&M 0.3165 Southwestern
    Jackson State MS 0.2981 Southwestern
    Texas Southern 0.2619 Southwestern
    0.3408 Southwestern Average

    Oral Roberts 0.5188 Summit
    South Dakota State 0.5169 Summit
    Southern Utah 0.4357 Summit
    Western Illinois 0.4297 Summit
    North Dakota State 0.4243 Summit
    IUPU Indianapolis 0.4111 Summit
    Oakland 0.4060 Summit
    Centenary 0.3913 Summit
    IPFW 0.3771 Summit
    0.4346 Summit Average

    Denver 0.6200 Sun Belt
    Western Kentucky 0.5533 Sun Belt
    Arkansas Little Rock 0.5207 Sun Belt
    North Texas 0.5200 Sun Belt
    Middle Tennessee 0.5077 Sun Belt
    Troy 0.4600 Sun Belt
    Louisiana Monroe 0.4349 Sun Belt
    Louisiana Lafayette 0.4284 Sun Belt
    Florida Atlantic 0.4098 Sun Belt
    Arkansas State 0.3960 Sun Belt
    Florida International 0.3902 Sun Belt
    South Alabama 0.3825 Sun Belt
    0.4686 Sun Belt Average

    Longwood 0.5210 United
    Utah Valley State 0.4637 United
    Howard 0.3796 United
    Delaware State 0.3703 United
    NJIT 0.3343 United
    SC State 0.3117 United
    0.3968 United Average

    Hawaii U 0.5210 WAC
    Boise State 0.4938 WAC
    Utah State 0.4872 WAC
    San Jose State 0.4403 WAC
    Fresno State 0.4353 WAC
    Nevada U 0.4316 WAC
    Idaho U 0.3994 WAC
    Louisiana Tech 0.3988 WAC
    0.4509 WAC Average

    Portland U 0.7222 West Coast
    San Diego U 0.6210 West Coast
    Loyola Marymount 0.6015 West Coast
    Santa Clara 0.5751 West Coast
    St Marys 0.5726 West Coast
    Pepperdine 0.5273 West Coast
    Gonzaga 0.4931 West Coast
    San Francisco 0.4741 West Coast
    0.5734 West Coast Average

    Arranging the conferences in order of strength based on the above table:

    Pac Ten .6146
    ACC .6053
    WCC .5734
    Big Twelve .5719
    Big Ten .5667
    SEC .5593
    Big East .5531
    Ivy .5522
    Big West .5431
    Mtn West .5342
    Colonial .5302
    Conference USA .5257
    Atlantic Ten .5146
    Southern .4931
    Horizon .4820
    Mid American .4798
    Missouri Valley .4692
    Sun Belt .4686
    Atlantic Sun .4685
    Metro Atlantic .4629
    Big Sky .4612
    Big South .4609
    America East .4589
    Patriot .4550
    WAC .4509
    Northeast .4485
    Southland .4348
    Summit .4346
    Ohio Valley .4187
    Independent .4174
    United .3968
    Southwestern .3408
     
  22. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    Still a lot of screwy stuff. (No surprise there.) One thing that stands out: USC at #14, which is absolutely ridiculous. The defending champs' only blemish is an away loss to one of those Portland schools...

    Obviously their RPI is being hurt by the double whammy of Georgia's (2-7-1) and Santa Clara's (3-6-1) poor won-loss records -- but those teams are obviously much better than their mere records would indicate (although no one's saying this is one of the Santa Clara powerhouses of years past).

    Look at who Georgia's played: besides USC, those losses have mainly come against Virginia, Stanford, North Carolina, Duke, and Florida! What a tough schedule!

    Albyn Jones still has them at a respectable #47 but of course they're off the RPI charts as is Santa Clara.

    West Virginia is down at #32 (A-J #14)

    Some teams that the RPI overrates dramatically (compared to Albyn Jones's 9/21 ratings, which are a week behind):

    Old Dominion: RPI 24 / AJ 80
    Michigan State 25 / AJ 54
    Georgetown RPI 34 / AJ 62
    Richmond RPI 38 / AJ 90
    St John's RPI 51 / AJ 81
    College of Charleston RPI 52 / AJ 110
    NC State RPI 56 / AJ 92
     
  24. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    Well, I think USC has no one to blame but themselves.

    The Portland trip you mention is a case in point. Sure, they lost to UP away, but I don't think that's the cause of their RPI woes.
    On that same trip, they scheduled Portland State, which is now 1-8-2, and compounded it by scheduling Georgia at 2-7-1, Bakersfield at 4-7-1, and SMU at 3-6-1.

    The Santa Clara whammy is mostly incidental to that, and SCU's RPI isn't as low as the other teams' because they have a tougher schedule.


    I understand that you can't always predict how schools will fare two or three years in advance, but none of these teams is doing anything unforeseen.

    Playing and beating any of those teams is probably worse for the RPI than not having played them at all. The whole idea of the RPI is that it rewards playing tough teams and beating them.


    The USC ranking just shows that the RPI functions as designed. They scheduled weak teams, and they are paying for the decision.
     
  25. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    Please.

    The RPI screws other teams besides Portland.

    Portland State? I believe UP has played Portland State a few times. Hmmm, as recently as 2007 in fact.

    Now, Portland State is a weak program, but "none of these teams are doing anything unforeseen?!!" How about Georgia, which finished last year at 18-4-2? Their coach is Pat Baker - I think he's the guy who built Florida State into a good program before Mark Krikorian took over when Baker left for Georgia. (Georgia's record in 2005 and 2006: 12-6-2 and 10-9.) Georgia has an ambitious schedule this year and it turned out to be too much for them, but they're still a very decent team, as the Albyn Jones ratings still show. And the RPI doesn't.

    This isn't just about Portland, you know.
     
