The NCAA's most recent RPI report, "last updated - October 5, 2011," is available here: http://www.ncaa.com/rankings/soccer-women/d1 Although the report is dated October 5, it does not include the Monday and Tuesday games of October 3 and 4.
CP, I have a few questions:

1. Will the NCAA still be seeding 16 teams? If so, how will they determine hosting sites for the other 16 locations, since the 1st round will be at 32 separate sites? Will it be based on highest RPI? Best bid?

2. Assuming the top 8 seeds all win, are they guaranteed to host the 2nd and 3rd rounds? If so, what is the advantage to being a 9-16 seed? I am guessing it's the team they play (a 3 seed would get a 2 and a 4 seed would get the 1)?

3. With the logistics of the bracket, what challenges do you see the NCAA facing that differ from the past?

Thanks for all your work!
I haven't seen anything new from the NCAA, so my best guess still is what I said in post #11 on this thread. I think we're all waiting with a lot of interest to see whether the NCAA will follow its past practice of favoring the higher seed when deciding game location (or the team with the better RPI if there is no seeded team), or will shift to a more economic basis for deciding sites when two seeds are to be at the same site or unseeded teams are to be at the same site.

If the NCAA is going to use economic considerations in site selection, that obviously poses some challenges, since it will involve comparing different bids to each other. This would apply to the 1st round games between unseeded teams, to the second/third round groups that have two seeded teams, and to the quarterfinal games that have two seeded teams.

I haven't tried to figure out whether the old format or the new one makes it easier for the NCAA to place seeds so as to give the tournament a more regional structure, given that it now puts seeded teams in pods (four #1s, four #2s, etc.) rather than seeding #1 through #16.

If the NCAA follows its previous game location practice, then the main benefits to teams seeded #9 through #16 are that they get to host the first round game, they won't play another seeded team in the first or second round, and, for the #9 through #12 seeds, they can't play a #1 seed until the quarterfinals. This also is a benefit to the #1 through #8 teams -- the #9 through #16 teams are placed "away" from them in the bracket.

You are right, by the way, that the four #1 seeds will play the four #4 seeds, and the four #2 seeds will play the four #3 seeds, if all advance to the round of 16. At least, that's the way they've always done it.
10.9.2011 RPI Report

I have posted a new RPI Report at the RPI for Division I Women's Soccer website, covering games through Sunday, October 9. It is in the form of an Excel spreadsheet attachment (downloadable) at the bottom of the website's RPI Reports page. Use the following link to go to that page: https://sites.google.com/site/rpifordivisioniwomenssoccer/rpi-reports The report file title is 10.9.2011 RPI Report.

Page 1 of the report shows details of teams' RPIs and strength of schedule, as well as what their RPIs would be under a modified RPI that takes game locations and regional strength into account. Page 2 of the report shows conferences' average RPIs. Page 3 of the report shows regional playing pools' average RPIs.

NOTE: I have added some new information to the Report, because I was curious to see what help or hindrance teams will receive to the "strength of schedule" portions of their RPIs as a result of conference play:

On the "RPI Report Source" page, I have added a new column at the far right, named "NC Contribution to Strength of Schedule." This represents the team's contribution to its opponents' strength of schedule based only on the team's non-conference games.

On the "Conference Report Source" page, I have added two new columns, named "NC Contribution to Strength of Schedule" and "NC Contribution to SoS Rank." The purpose of the first of these columns is to show roughly what strength of schedule contributions teams from the particular conference will receive towards their RPIs from the teams' non-conference games.

This is an important piece of information. Once conference play begins, with each conference team playing every other conference team (in most conferences), there is a winner and a loser in every conference game. Ultimately, this means conference games alone will pull a conference team's strength of schedule towards 0.5000.
Thus the non-conference results of a conference's teams play a key role in the conference teams' strengths of schedule. From the "RPI Report Source" page, I take the individual team "NC Contribution to Strength of Schedule" numbers, and on the "Conference Report Source" page I average these numbers for each conference to come up with the conference's "NC Contribution to Strength of Schedule." By referring to this number, the teams in each conference can get an idea of approximately how much help they will get, from conference play, towards the strength of schedule elements of their RPIs. In other words: how much will conference play help or hurt the strength of schedule portion of my RPI?

There is one important caution about these conference contribution numbers. If I am, for example, Marquette, and I am looking at the "NC Contribution to Strength of Schedule" number for the Big East Conference, the Big East number is not going to be quite right. This is because the Big East number includes Marquette's own non-conference contribution to the other Big East teams' strengths of schedule, whereas Marquette's own non-conference contribution should not be counted when considering the Big East's contribution to Marquette's strength of schedule. Thus if I am a team with one of the better non-conference records in a conference, the conference teams' non-conference contribution to my strength of schedule will be somewhat less than the "NC Contribution to Strength of Schedule" number. Conversely, if I am a team with one of the poorer non-conference records, the conference teams' contribution to my strength of schedule will be somewhat higher than that number.
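The caution above can be made concrete with a small sketch. The contribution numbers below are made up purely for illustration; only the arithmetic (a conference-wide average versus a leave-one-out average) reflects what the report's columns do.

```python
def conference_contribution(nc_contribs, team):
    """Conference-wide average of NC contributions to strength of schedule,
    plus the leave-one-out average that actually applies to `team` (a team's
    own contribution never counts toward its own strength of schedule)."""
    overall = sum(nc_contribs.values()) / len(nc_contribs)
    others = [v for t, v in nc_contribs.items() if t != team]
    return overall, sum(others) / len(others)

# Hypothetical contribution numbers -- illustration only.
contribs = {"Marquette": 0.55, "Team B": 0.40, "Team C": 0.35, "Team D": 0.30}
overall, for_marquette = conference_contribution(contribs, "Marquette")
# A team with a strong non-conference record sees a lower number than the
# published conference average (here 0.35 versus 0.40), exactly as described.
```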
Quote: Originally Posted by Morris20: ".... Nebraska's remaining opponents have a win% of .655 at the moment. Exactly how many wins would the Huskers need to raise their RPI given this bump to "element 2?" I can't imagine it's too many, but I'm curious and clearly my "sense" of how RPI works isn't very accurate."

I can't answer this exactly, as the RPI has too many moving parts. But the following table will help with the answer. What the table does is show what conference play does for the conference teams' strengths of schedule, based on the non-conference schedules the teams played. The numbers move over the course of the season as the non-conference opponents record additional results.

Also, now that conference play has begun, teams' strengths of schedule will tend to move towards 0.500, so that strength of schedule will have a reduced influence on teams' RPI ratings, to the point that by the end of the season the effective contribution of strength of schedule to teams' RPIs will be about 50% (notwithstanding the way the RPI formula looks).

The table indicates that, as of October 9, the Big 10 was the fourth best conference to be in, in terms of what it will do for teams' RPIs. It's important to note, however, that the conference's number is not the exact number that would apply to any team. For a particular team, you would have to delete its contribution to the number in order to come up with the exact number that relates to that team.

Obviously, for strength of schedule purposes, it's better to be in one of the top conferences. On the other hand, for winning percentage purposes, it's better to be lower down, where it's easier to win conference games. As stated above, at the end of the season, the effective weight of each of these two aspects of the RPI -- winning percentage and strength of schedule -- is right around 50%.
Rank/Conference/Conference "Average" Contribution to SoS Based on Non-Conference Games

Code:
1 ACC 0.5646
2 BigTwelve 0.4917
3 SEC 0.4870
4 BigTen 0.4495
5 WestCoast 0.4457
6 PacTwelve 0.4425
7 ConferenceUSA 0.4292
8 BigEast 0.4246
9 BigWest 0.4092
10 AtlanticTen 0.4005
11 SunBelt 0.3864
12 Horizon 0.3844
13 Colonial 0.3789
14 Patriot 0.3746
15 MidAmerican 0.3743
16 MountainWest 0.3691
17 Southern 0.3661
18 Northeast 0.3643
19 Southland 0.3622
20 Ivy 0.3558
21 AtlanticSun 0.3423
22 MissouriValley 0.3310
23 OhioValley 0.3296
24 WAC 0.3293
25 AmericaEast 0.3245
26 MetroAtlantic 0.3205
27 Independent 0.3124
28 BigSouth 0.3067
29 Summit 0.2949
30 BigSky 0.2489
31 GreatWest 0.2352
32 Southwestern 0.1456
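The point made earlier, that conference play alone pulls records toward .500, follows from simple bookkeeping: in a full round robin with no ties, total wins equal total losses, so the teams' conference-only winning percentages must average exactly .500 no matter how the games fall. A quick sketch (a simulated round robin with randomly chosen winners; the team count and seed are arbitrary):

```python
import itertools
import random

def round_robin_win_pcts(n_teams, seed=0):
    """Play a single round robin where every game produces a winner,
    and return each team's conference-only winning percentage."""
    rng = random.Random(seed)
    wins = [0] * n_teams
    for a, b in itertools.combinations(range(n_teams), 2):
        wins[rng.choice((a, b))] += 1  # one winner, one loser, every game
    games = n_teams - 1  # each team plays every other team once
    return [w / games for w in wins]

pcts = round_robin_win_pcts(12)
avg = sum(pcts) / len(pcts)
# However the individual results fall, avg comes out to 0.5.
```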
In fact, since Nebraska currently sits towards the bottom, they stand to benefit particularly from the RPI's "element 2." When I look at the math involved (with my VERY limited understanding), it doesn't look like they need to win another game to raise their RPI, especially if they can pick up a draw or two. Also, I'm not sure their RPI number needs to rise in order to raise their place in the rankings, as all the mid-majors are getting hosed with each additional match in their leagues.
Morris20, the last part of your most recent post, about the rankings as distinguished from the ratings, is worth remembering towards the end of the season. If you remember, remind me when the season is over. I may be able to run a simulation of Nebraska losing all their games from here on out. From that, we can see where their RPI rating "would have" ended up and what their ranking "would have" been if they'd lost all their games. It would make an interesting and, for those really interested in the RPI, informative exercise. (Of course, if they lose all their remaining games, we'll know the answer, but I'm not counting on that happening.)
I've replaced the 10.9.2011 RPI Report on the RPI for Division I Women's Soccer website with a new report (same title), due to the earlier report having two incorrect game results. Although the games were not particularly consequential (Albany v Binghamton on 10/6 and IPFW v UMKC on 10/7), the correction of the game results did cause changes in the rankings of some of the top teams. The corrected report is in the Excel file titled 10.9.2011 RPI Report, which is an attachment at the bottom of this page: https://sites.google.com/site/rpifordivisioniwomenssoccer/rpi-reports
In case those of you who follow this thread are not aware of it, my RPI rankings do not exactly match the NCAA's rankings. At this point, as best as I can determine, our games data are the same. I do not have access, however, to the NCAA's actual games database for the RPI (which is not necessarily identical to the data in its Game by Game statistics system). I do have access, via the NCAA's RPI reports, to their won/lost/tied and away/home/neutral totals for the teams, and my totals now match theirs, with the two corrections I made earlier today.

The differences between my rankings and the NCAA's are small, although there are quite a few of them. I simply don't know why we are having differences, because we shouldn't be.

One possible explanation is that I may be using incorrect bonus and penalty amounts, but that seems almost impossible, since my rankings matched the NCAA's exactly, using the same amounts, for the 2009 season. The chance of that happening with incorrect amounts is almost 0.

A second possibility is a data error. Given the pattern of the differences, that seems unlikely. They're too small for it to be likely they're accounted for by a data error.

The last possibility is a programming error. I'm confident I don't have a programming error, because I vet my numbers periodically against the nc-soccer numbers, which come from an independently programmed system, and our numbers consistently match so long as our data match. If it's not my programming error, however, there could be a programming error at the NCAA. As some of you may remember, I believed it was possible last year that partway through the season, the NCAA inadvertently introduced a programming error into its system. Presumably, the NCAA's system is "protected," but can be "unprotected" on occasion in order to run experiments with RPI formula modifications.
I know the NCAA did this at least once last year, and it was immediately after that that I suspected they had introduced a programming error into their system. A minor programming error could result in ranking differences like the ones I'm looking at. If the NCAA published its actual RPI ratings, it would be much easier to try to figure out what the issue is, but they don't. In any event, my ratings -- and those of nc-soccer -- are reliable for practical purposes. I just want people to be aware that they don't exactly match the NCAA's.
I have just posted, at the RPI for Division I Women's Soccer website, my RPI Report covering games through Sunday, October 16, 2011. The Report is in the form of an Excel spreadsheet attachment titled 10.16.2011 RPI Report at the bottom of the website's RPI Reports page. Here is a link to the RPI Reports page: https://sites.google.com/site/rpifordivisioniwomenssoccer/rpi-reports

The NCAA will be issuing its next official RPI report on Wednesday. As I reported in the previous post, my rankings are close to, but not identical with, the NCAA's, although the two should be identical. See the previous post for a discussion of the possible reasons for the differences.

One of the possible reasons that I cited in the previous post is my possibly not using the correct bonus and penalty amounts. Over the last week, I have worked on that possibility, and I am as certain as I can be that it is not the problem. I have been able to compare my 2009 season ratings (as well as rankings) with the NCAA's ratings (as well as rankings) as of the end of the full 2009 season. On going through that process this week, the result is that for 279 of the teams, my ratings match the NCAA's exactly. Of the remaining 43 teams, for 40 my ratings differ from the NCAA's by 0.0001; for 2 the difference is 0.0002; and for 1 the difference is 0.0004.

The likely explanation for these 43 differences is our use of programs with different "rounding" protocols. I use Excel, which, as its default protocol, rounds at 15 decimal places. I believe the NCAA's program rounds at a smaller number of decimal places. When different programs round to different numbers of decimal places while performing complex calculations involving a large number of transactions, it is possible for them to produce different numerical results simply due to the different rounding protocols.
This is the only reasonable explanation I can think of for these slight 2009 differences, as the other possibilities I am aware of would result in more significant discrepancies than the ones I have described. So, my theory that the NCAA may have introduced an error into its program last year, partway through the season, still stands as the only logical explanation for this year's differences. Of course, I don't know that's the case, but I can't think of anything else that makes sense.
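To illustrate how rounding protocols alone can produce differences of about 0.0001 in otherwise identical calculations, here is a toy iteration. This is not the NCAA's actual formula; the update rule and the 4-decimal-place rounding choice are invented purely for the illustration.

```python
def iterate(x, places=None, n=200):
    """Repeatedly average x with 1/3, optionally rounding each
    intermediate result to a fixed number of decimal places."""
    for _ in range(n):
        x = (x + 1 / 3) / 2
        if places is not None:
            x = round(x, places)
    return x

full = iterate(0.3)              # full floating-point precision
coarse = iterate(0.3, places=4)  # round every intermediate step
diff = abs(full - coarse)
# Both runs start from the same input, yet diff ends up on the order of
# 1e-5: the same kind of tiny discrepancy that different rounding
# protocols can produce in a long chain of RPI calculations.
```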
One team that stands out like a sore thumb in the current RPI is Vanderbilt, which at #42 would be right in "bubble" territory. Massey's ratings have them nowhere close, with their rating putting them in 89th place. They haven't yet played two of the weakest teams in their conference, Mississippi and Mississippi St, so their RPI will probably take a hit from those games and render the point moot as far as the playoff bracket is concerned. But still... That the RPI should be so far off this late in the season shows you how inaccurate it can be.

A big part of the problem shows up in how the RPI assesses Vanderbilt's strength of schedule. The RPI currently calculates Vanderbilt's schedule as being the 10th toughest in the nation. Massey, on the other hand, ranks their games played as the 72nd toughest schedule. In addition to their SEC games, Vanderbilt has played:

Furman 9-7-1
West Kentucky 10-5-4
Missouri St 10-3-2
South Florida 6-7-4
College of Charleston 10-6
Middle Tennessee 9-7
Memphis 15-0-1
Samford 8-5-2

6 of those games were at home, 1 away and 1 neutral. That's a pretty good stack of won/loss records, so you can see right off the bat how the RPI would treat Vanderbilt well when it comes to Element 2 (based on their opponents' won/loss records).
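For those curious about the mechanics, here is a sketch of the Element 2 arithmetic using the records listed above. Two hedges: in the real RPI, each opponent's record is first adjusted to remove its games against the team being rated, which I have not done here, and I am using the usual convention of counting a tie as half a win.

```python
def win_pct(wins, losses, ties):
    """Winning percentage with ties counted as half a win."""
    return (wins + 0.5 * ties) / (wins + losses + ties)

def element2(opponent_records):
    """RPI Element 2 (roughly): the average of opponents' winning percentages."""
    pcts = [win_pct(w, l, t) for (w, l, t) in opponent_records]
    return sum(pcts) / len(pcts)

# Vanderbilt's listed non-conference opponents, records as quoted above
# (unadjusted -- a real Element 2 would strip out games vs. Vanderbilt).
records = [
    (9, 7, 1),   # Furman
    (10, 5, 4),  # West Kentucky
    (10, 3, 2),  # Missouri St
    (6, 7, 4),   # South Florida
    (10, 6, 0),  # College of Charleston
    (9, 7, 0),   # Middle Tennessee
    (15, 0, 1),  # Memphis
    (8, 5, 2),   # Samford
]
e2 = element2(records)  # roughly 0.644 -- a strong Element 2 contribution
```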
Here's an interesting case that isn't significant in itself, but that illustrates what could be significant if it involved different teams. St. Bonaventure played St. Louis this past Sunday. Due to weather and poor field conditions, the game was transferred from St. Bonaventure to Bradford High School in Bradford, PA. The high school is 20 miles from St. Bonaventure. The question is: does this remain a home game for St. Bonaventure, or does it become a neutral site game?

The NCAA Game by Game web system reports the game as a neutral site game, so that is how the school(s) reported it to the NCAA. All that game location affects is bonus/penalty points, and these two teams are far down in the rankings, so in this case it isn't worth quibbling about. But suppose they had been top teams, with the game involving bonus points?

There is no NCAA policy that expressly addresses game site when a game is moved due to weather and field conditions. But the following policy is very close and almost certainly is the one that applies:

Team A is playing Team B. For results and the Rating Percentage Index (RPI), the game is considered a "home" game for Team A and an "away" game for Team B if the site of the contest:

* is a nearby temporary emergency site while the regular home site for Team A is being repaired. Example: Owensboro Sportscenter at Kentucky Wesleyan was damaged in a tornado. This forces Kentucky Wesleyan to move its basketball home game with Southern Indiana to a nearby high school gymnasium while the Sportscenter is being repaired. Since the game was played in a nearby temporary set-up, it is still considered a home game for Kentucky Wesleyan. However, if no area arenas or gymnasiums are available and the game is moved the 40 miles to Southern Indiana's home court, then the game is now considered a home game for Southern Indiana.

The question in the St. Bonaventure v St. Louis case is: is 20 miles away "nearby" or in the "area" of St. Bonaventure?
In other circumstances, this could be a significant question.
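The quoted policy can be read as a simple decision rule. The sketch below encodes one possible reading of it, but note the key assumption: the policy never defines "nearby," so the 25-mile threshold here is invented purely for illustration, and that undefined threshold is exactly the ambiguity in the St. Bonaventure case.

```python
def classify_moved_game(distance_miles, at_opponent_venue, nearby_threshold=25):
    """One possible encoding of the quoted policy for Team A's moved game.
    `nearby_threshold` is a made-up number; the policy leaves 'nearby'
    undefined, which is the whole question."""
    if at_opponent_venue:
        return "away"     # moved to Team B's venue: home game for Team B
    if distance_miles <= nearby_threshold:
        return "home"     # nearby temporary site: still Team A's home game
    return "neutral"

# St. Bonaventure v St. Louis, moved 20 miles to Bradford High School:
# under a 25-mile reading of "nearby," it would stay a home game; the
# schools reported it as a neutral site game instead.
site = classify_moved_game(20, at_opponent_venue=False)
```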
Interesting case. Another case, which comes up more frequently, is when two teams play at a "neutral" site that is much closer to one team than the other. This happens frequently in conference tournaments and in pre-conference weekends when one team hosts 3 other teams for a weekend tournament. For example, when Stanford invites Santa Clara and two East Coast teams: Santa Clara isn't at home, but is the site really "neutral" for them? Same when UCLA invites Pepperdine or Loyola Marymount and two teams from back East. I don't blame the NCAA for not thinking it important enough to define rules for when a game is a "semi-home" game for one team and a "semi-away" game for the other, but it's an interesting topic that's been mentioned often, and I don't know if anyone's studied it extensively to see if there's a significant and measurable advantage for the team closer to home.
Massey has home field advantage pegged at about .36 goals (so a .72 goal swing between home and away). CPT probably has the percentages at which home and away teams beat the odds.
Meanwhile, coming back to Vanderbilt, I thought it would be interesting to check in with our old friend at nc-soccer, the "Approx Strength" column, to get more insight into how Vanderbilt gets the RPI ranking it currently does. Of course they get a big RPI boost (in Elements 2 & 3, the "strength-of-schedule" components) from playing Memphis, with its 15-0-1 current record. But at least we have reason to think Memphis is a very good team, maybe even a very, very good one. So I focused on what kind of boost Vanderbilt gets from playing some of the lesser-known teams.

To see the Approx Strength list, go to the nc-soccer website's page on unadjusted RPI and click twice on the column header where it says Approx Strength. That will sort the list from highest to lowest. The higher a team is on the list, the more they will benefit the teams that play them in terms of strength of schedule.

Missouri State is #47 on the list (as of Tuesday; it may have changed a bit since then, as the table gets updated with game results during the week). That's higher than LSU, Miami and Florida State. A win over Missouri State (at least in the unadjusted RPI) will do more for your RPI than a win over any of those three teams, as well as all the teams mentioned below.

Western Kentucky is #68 on the list. That doesn't seem terribly high, but it's higher than BYU, Missouri, Louisville, North Carolina St, Wisconsin, and Notre Dame, as well as every other team mentioned below.

College of Charleston is at #86. That's tied with Michigan and ahead of Texas, as well as the teams mentioned below.

Samford at #95 is ahead of Ohio St.

Middle Tennessee at #117 is still ahead of San Diego, Connecticut, Rutgers, and Washington.

Furman at #139 is ahead of Portland, Alabama, Iowa St, Minnesota and Nebraska, and still light-years ahead of USC.

No wonder there are teams like Vanderbilt that get vaulted into much higher rankings than they should have.
For convenience I'm listing the "Approx Strength" column from the nc-soccer website here (as of Tuesday's games, I believe), but I urge those of you interested in the RPI to check out the website yourself. That, and the work that cpthomas does, are tremendous resources available to fans of college women's soccer. Most of the top 20 or so teams will seem reasonable enough, although La Salle, Hartford, and Stephen F. Austin may, or may not, raise some eyebrows. But as you go through the list, when you realize that the teams higher on the list are considered stronger opponents (as far as calculating your RPI) than the teams lower on the list, it's bound to strike most of you as very odd.

Code:
1 Stanford
2 Memphis
3 Oklahoma St.
4 La Salle
5 Milwaukee
6 Duke
7 Pepperdine
8 UCLA
9 Wake Forest
10 Marquette
11 William & Mary
12 Hartford
13 North Carolina
14 Baylor
15 Penn St.
16 Boston U.
17 Florida
18 Stephen F. Austin
19 Virginia
20 Dayton
21 Denver
22 Penn
23 Santa Clara
24 West Virginia
25 North Dakota St.
26 Central Mich.
27 California
28 Tennessee
29 UC Irvine
30 UCF
31 Boston College
32 Georgia
33 Auburn
34 Kentucky
35 Illinois
36 Michigan St.
37 Virginia Tech
38 Iowa
39 Fla. Gulf Coast
40 Texas A&M
41 Georgetown
42 Washington St.
43 Massachusetts
44 Maryland
45 South Carolina
46 Utah St.
47 Missouri St.
48 Radford
49 LSU
50 Miami (Fla.)
51 East Tenn. St.
52 Army
53 Florida St.
54 Harvard
55 Lafayette
56 Oregon St.
57 Richmond
58 South Ala.
59 Southeast Mo. St.
60 Kansas
61 North Texas
62 Campbell
63 Detroit
64 Long Beach St.
65 Rice
66 UC Davis
67 Texas Tech
68 Western Kentucky
69 BYU
70 Missouri
71 Tulsa
72 Louisville
73 North Carolina St.
74 Colgate
75 Seattle
76 Marist
77 UTEP
78 Monmouth
79 Buffalo
80 Texas St.
81 Brown
82 Wisconsin
83 Jacksonville
84 New Mexico
85 Notre Dame
86 College of Charleston
87 Michigan
88 Toledo
89 Fresno St.
90 St. Francis (Pa.)
91 FIU
92 St. Mary's (Cal.)
93 Navy
94 Texas
95 Creighton
96 Samford
97 Rider
98 Yale
99 Western Michigan
100 Kent St.
101 SMU
102 Wyoming
103 Sacred Heart
104 Southeastern La.
105 Ball St.
106 Illinois St.
107 Delaware
108 Ohio St.
109 Vanderbilt
110 Wagner
111 South Dakota St.
112 East Carolina
113 Loyola Marymount
114 UNC Greensboro
115 Syracuse
116 Utah
117 Middle Tenn.
118 Northeastern
119 San Diego
120 Wofford
121 St. Bonaventure
122 Tenn.-Martin
123 Connecticut
124 Winthrop
125 George Mason
126 Rutgers
127 Washington
128 Wright St.
129 Louisiana Tech
130 Seton Hall
131 Colorado Col.
132 James Madison
133 San Diego St.
134 St. John's (N.Y.)
135 Cal St. Fullerton
136 Long Island
137 Robert Morris
138 Fordham
139 Furman
140 Portland
141 Mercer
142 Alabama
143 Cal Poly
144 Western Carolina
145 Cleveland St.
146 Missouri-KC
147 Gonzaga
148 Villanova
149 Iowa St.
150 Miami (Ohio)
151 Canisius
152 Lamar
153 St. Joseph's
154 Hofstra
155 Lehigh
156 TCU
157 Oregon
158 Utah Valley
159 Pacific
160 Valparaiso
161 Akron
162 Ark.-Pine Bluff
163 UC Riverside
164 Minnesota
165 New Mexico State
166 Siena
167 Bryant
168 Davidson
169 Nebraska
170 Longwood
171 Austin Peay
172 Central Conn. St.
173 North Fla.
174 UTSA
175 Cal St. Northridge
176 Arizona St.
177 Towson
178 Butler
179 Liberty
180 Charlotte
181 South Fla.
182 Elon
183 Northern Colo.
184 Troy
185 Providence
186 McNeese St.
187 Old Dominion
188 UNI
189 Belmont
190 Morehead St.
191 UALR
192 Xavier
193 Drexel
194 Ohio
195 Charleston So.
196 UNLV
197 Cincinnati
198 Mississippi St.
199 Fairfield
200 Idaho
201 Arkansas St.
202 UC Santa Barbara
203 Oklahoma
204 Indiana
205 Purdue
206 VCU
207 Eastern Ky.
208 Portland St.
209 Princeton
210 St. Louis
211 Chattanooga
212 High Point
213 Houston Baptist
214 Northwestern St.
215 Sacramento St.
216 SIU Edwardsville
217 Georgia St.
218 Murray St.
219 Niagara
220 Sam Houston St.
221 Vermont
222 Oral Roberts
223 Mississippi
224 Loyola (Md.)
225 Appalachian St.
226 Albany (N.Y.)
227 Houston
228 Clemson
229 Kennesaw St.
230 UAB
231 Western Illinois
232 Boise St.
233 George Washington
234 Ga. Southern
235 Eastern Ill.
236 Rhode Island
237 San Francisco
238 Weber St.
239 Central Ark.
240 Fla. Atlantic
241 Gardner-Webb
242 Montana
243 Mississippi Val.
244 Air Force
245 North Dakota
246 Southern California
247 Nicholls St.
248 Mt. St. Mary's
249 Marshall
250 American
251 Maine
252 Quinnipiac
253 South Carolina St.
254 Arkansas
255 Columbia
256 Loyola (Ill.)
257 Drake
258 Oakland
259 Bowling Green
260 Tennessee Tech
261 UN Omaha
262 Binghamton-SUNY
263 Colorado
264 Jackson St.
265 Stony Brook
266 Dartmouth
267 Stetson
268 Temple
269 Green Bay
270 Fairleigh Dickinson
271 New Hampshire
272 Duquesne
273 Youngstown St.
274 Hawaii
275 IPFW
276 La.-Monroe
277 Citadel
278 Bucknell
279 La.-Lafayette
280 Iona
281 Alcorn St.
282 DePaul
283 Lipscomb
284 Southern Miss.
285 Cornell
286 UNC Asheville
287 Pittsburgh
288 Northern Ariz.
289 IUPUI
290 Presbyterian
291 Francis Marion
292 Cal St. Bakersfield
293 Jacksonville St.
294 Manhattan
295 Northwestern
296 Southern Utah
297 Holy Cross
298 Idaho St.
299 Arizona
300 Evansville
301 San Jose St.
302 Northern Ill.
303 Southern U.
304 Nevada
305 Coastal Carolina
306 Indiana St.
307 Eastern Mich.
308 Howard
309 VMI
310 St. Peter's
311 UNC Wilmington
312 Texas Southern
313 NJIT
314 South Dakota
315 S.C. Upstate
316 Alabama A&M
317 UMBC
318 Eastern Wash.
319 Delaware St.
320 Grambling
321 Alabama St.
322 Prairie View
That became a moot point faster than I thought. I'm kind of sorry Vanderbilt lost, because it deprives us of a possibly scandalous situation as far as the RPI is concerned. Still, there are some valuable lessons to be learned from it. As an aside, Massey had Vanderbilt as slight underdogs playing away at Mississippi (44% win probability), giving the most likely result as a 1-1 tie.
I have posted a new RPI Report at the RPI for Division I Women's Soccer website, covering games through Sunday, October 23. It is in the form of an Excel spreadsheet attachment (downloadable) at the bottom of the website's RPI Reports page. Use the following link to go to that page: https://sites.google.com/site/rpifordivisioniwomenssoccer/rpi-reports The report file title is 10.23.2011 RPI Report. Page 1 of the report shows details of teams' RPIs and strength of schedule, as well as what their RPIs would be under a modified RPI that takes game locations and regional strength into account. Page 2 of the report shows conferences' average RPIs. Page 3 of the report shows regional playing pools' average RPIs.
Discussions of the RPI are popping up again over at the college women's volleyball forum on the VolleyTalk/Proboards website. There's generally good discussion in the thread "RPI Overrated?" -- some of it very good, some of it from people who really know what they're talking about.
I have posted a new RPI Report at the RPI for Division I Women's Soccer website, covering games through Sunday, October 30. It is in the form of an Excel spreadsheet attachment (downloadable) at the bottom of the website's RPI Reports page. Use the following link to go to that page: https://sites.google.com/site/rpifordivisioniwomenssoccer/rpi-reports The report file title is 10.30.2011 RPI Report. Page 1 of the report shows details of teams' RPIs and strength of schedule, as well as what their RPIs would be under a modified RPI that takes game locations and regional strength into account. Page 2 of the report shows conferences' average RPIs. Page 3 of the report shows regional playing pools' average RPIs.
By the way, I should mention one of the better arguments in favor of the RPI, which I came across on the VolleyTalk Proboards forum:

"What does the RPI cause to happen? This hasn't been discussed very much. The RPI causes teams to desire scheduling games against high-RPI teams from other conferences.... In other words RPI encourages high-quality (at least high-RPI) head-to-head matches. That's a good thing for getting real data for good selection."

And a reply: "This is great insight, and spot on the mark. In fact, the NCAA makes no bones about it. One of the objectives of using RPI is to get the teams from the top of the conferences to schedule each other. In fact, in the best of RPI worlds, the preferred scheduling would be that the conference teams all schedule their counterparts in other conferences. That provides the best RPI outcomes."

Read more: http://volleytalk.proboards.com/ind...on=display&thread=40940&page=12#ixzz1cTCtb89S

There's a certain imprecision there in the failure to distinguish between high RPI and high strength-of-schedule value in the RPI (the famous Approx Strength column). Or, even more simply, between high RPI and an opponent's simple won/loss record. But still there's an important point there.
As with any sport where more advanced metrics exist and are roundly ignored by the NCAA, RPI is a hot topic on VolleyTalk as the tournament draws near. I think it's fairly likely that the weaknesses of the RPI are better understood here than they are there (thanks in no small part to the work of cpthomas). I'd been away from that board for quite a while, but I've been known to weigh in on matters relating to the RPI, in my own attempt to help dispel ignorance. I don't know if you posted it under a different name there or if it was someone else, but there's a follow-up that goes into that level of detail and, in fact, identifies a strong example of the weakness of how teams are assessed for "strength" as opponents by the RPI.
For purposes of at-large selections, I think it's important to see that one can't get away with just scheduling schools that make good contributions to one's RPI strength of schedule.

It's pretty clear that the RPI defines the "bubble" group under consideration for at-large selections. Once that group is set, the Committee looks at head-to-head results between teams from the group, of which there usually are few except when teams in the group are from the same conference. The Committee also looks at results of teams in the group against common opponents, of which there usually are some, and sometimes a lot if teams are from the same conference. The common opponent results, however, often are not conclusive.

If the decisions are not clear, which I believe happens most of the time, the Committee next looks at results against teams already selected for the bracket and at results over the last eight games. It's in the results against teams already selected for the bracket that teams distinguish themselves as deserving of playing in the Tournament.

What this means is that playing teams that give good contributions to your strength of schedule may be enough to get you into the bubble, but it's most likely not going to be enough to get you into the Tournament. If you want into the Tournament, you're going to have to play some top teams and get positive results against some of them. This is how Auburn did it last year, being the next-to-poorest ranked team in the bubble but with two outstanding wins, against #4 Florida and #13 Florida State.

When you look at the RPI in this context, it's not so bad for what it does. It may have a couple of teams in the bubble that are different from what a theoretically pure system would have. That can be a concern, but only if the teams the "pure" system would have had in the bubble would have gotten a selection.
For those of you who follow the RPI ratings very closely, it's worth knowing that the NCAA's RPI ratings this week included three of the four Monday games. The one they did not include is the Fairleigh Dickinson v Monmouth game.
I have posted a new RPI Report at the RPI for Division I Women's Soccer website, covering games through Sunday, November 6. It is in the form of an Excel spreadsheet attachment (downloadable) at the bottom of the website's RPI Reports page. Use the following link to go to that page: https://sites.google.com/site/rpifordivisioniwomenssoccer/rpi-reports

This has the "unofficial" version of the RPI rankings (with associated ratings) that the Women's Soccer Committee would have used in making its 2011 NCAA Tournament bracket decisions. The ratings match those available at the nc-soccer website. As noted previously, however, we have not been exactly matching the NCAA's "official" weekly published rankings, for reasons I have not been able to figure out. They are, however, very close.

The report file title is 11.6.2011 RPI Report. Page 1 of the report shows details of teams' RPIs and strength of schedule, as well as what their RPIs would be under a modified RPI that takes game locations and regional strength into account. Page 2 of the report shows conferences' average RPIs. Page 3 of the report shows regional playing pools' average RPIs.