Use of the RPI for Division I Women's Soccer

Discussion in 'Women's College' started by cpthomas, Jan 25, 2008.

  1. transplanted

    transplanted New Member

    Jul 27, 2008
    I thought I was going to be really smart and know the definition of the word biased. The definition I was using was one of the definitions in the dictionary and I was fired up. The dictionary says "exhibiting prejudice". I don't think the RPI does that. I WAS ON A ROLL. Then I read the second definition: "Tending to yield one outcome more frequently than others in a statistical experiment." As you have proven, the RPI can do that for stronger regions. I WAS REBUKED.

    This is why I still do not think it matters. I think the national tournament's job is to find the national champion. The job is not really to have the top 64 teams in the country; if it was, they wouldn't have automatic bids. I think for the most part the RPI does a good job of picking the best within the region.

    I think that Oregon did get a raw deal. I think they had a great season and were probably in the top 64 teams in the country. I also don't think they had a chance at winning a national championship. They had shown they were not one of the very top teams in the very, very strong West region, and even if they went on a run they were not going to win a national championship. If a team got in from a different region because the West was slightly stronger, I don't think it affected the national champion. I look at it in a very similar way to youth soccer. Nobody believes that the state champion from Wyoming or Alabama is going to be able to beat any of the top 3 or 4 teams from North Texas or Southern California, but it's a regional tournament.

    You might not believe that the Midwest is as deserving of as many spots as the West, because the West has the Pac-10 and the WCC (two of the three strongest conferences, in my humble opinion), or that the Southeast deserves more with the ACC, but my solution has always been the same: just be better. To the "mid-major" coach who won 18 games playing against the Sisters of the Poor and is upset because they were upset in the conference tournament, my answer is this: schedule some big dogs and take yourself off the bubble. To the "major conference" coach who doesn't get in: win more games. To the team from the strongest region who doesn't get in when lesser teams do: win more games in your region and be better.

    CPthomas makes a great case. I have learned to never argue with a stat guy, but I think the system gets the national championship contenders in there and keeps it a national tournament. It's not "fair" to everyone, but every sport has this debate. How do you pick the national champion in football? Let's have a playoff. The second we have an eight-team playoff, there will be four teams getting "ripped off" because they all think they should have been that 9th team. Every year during the men's basketball tournament they talk of expanding the field from 65 to more. There are always going to be hard feelings, but the best do this: win enough games and take it out of the NCAA's hands. Then hope they can't save a buck by sending you to Boulder.
     
  2. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    Your concept that the strength of regions needs to be accounted for parallels mine. We simply differ a bit on how to implement that. I'm pretty much in favor of either system in place of the current one. Mine, of course, is vastly superior :D

    On the subject of using polls, I have mixed feelings. The polls do offer a differing view and are varied enough to give more than one side to a story. However, I have issues with the composition of the pollsters.

    The NSCAA has representatives from each region, but perhaps these folks sometimes have a stake in the outcome, so maybe alternates for each region should also be used.

    Soccer Times has too many representatives from some conferences and none from others. People tend to have more respect for teams in their own conferences, to the exclusion of other conferences. Consider a current situation with respect to my favorite team: Portland and USC have identical records. UP has played slightly higher-rated teams and beat USC head to head, yet is a spot lower. Possible explanation? Three Pac-10 coaches on the panel and one WCC coach. I had a communication with Soccer Times' Gary Davidson about this issue a couple years ago (at the time there were, as I recall, 5 Pac-10 coaches on the panel and NO WCC coaches. As part of his explanation, Gary told me that he had asked Clive Charles, but he declined. Well, maybe he had asked Clive, but we all know he was sick with cancer and likely declined for that reason. I asked Gary about it three years after Clive died, though, so he could surely have found another coach. I offered to help). If Soccer Times limited participation to one coach from each conference, I'd trust the poll more.

    Soccer America doesn't reveal its panel, so we don't know if it's just Anson Dorrance and Chris Petrochelli, or who it is. Too scary.


    Soccer Buzz declares its panel to be "in house." If they would at least publish some names, we could know who to be mad at. Also, it's a small organization and East Coast based, which I think affects how they see the world.

    TopDrawer Soccer is also "in house." At least in that case, we can surmise it's Robert Ziegler. I like his poll because it tries to project not the best teams, but who the 64 tournament teams will be and how they will be seeded, a refreshing difference. My main issue with him is that he doesn't seem to get out West much, as is demonstrated by his other main task, rating future NCAA players. (He misses a LOT of Western talent.) If he had help in the West, I'd trust his ratings more.
     
  3. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006

    OK, but the RPI does something else, and that is to determine the seeds. Each contest has a statistical probability. No one believes that a #1 is going to have trouble with a #64, especially since that contest will be at home for the top seed, just as no one doubts that a contest between #31 and #32 will have about 50:50 odds. The top seeds get a much easier path through. That's OK in my book if they earned it, but if a team gets hosed by having to play a series of tough teams before it ever gets to the final 8, the RPI tends to influence the outcome, not just pick the contestants.

    So it does matter; it's not just throwing teams in the pot and expecting the best to rise to the top. It is important to get it as right as can be.
     
  4. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Transplanted, I appreciate your perspective. If the NCAA were to say, "The purpose of the Tournament is solely to select the champion and is not to provide an opportunity for the best teams to compete," then that is something we all could talk about in terms of whether it is a good policy or not. The stated policy of the NCAA, however, is to provide a chance for the best teams to compete. Specifically, here is the policy of the Championships/Competition Cabinet (reconstituted and renamed this year, but so far as I know this policy remains the same):

    NCAA Division I Championships/Competition Cabinet

    "Policies and Operating Procedures

    "8. Working Principles for Seeding, Pairing and Site Selection

    (Championships Other Than Men's and Women's Basketball)

    "The cabinet adopts the following, recognizing the unique nature of each sport:

    "A fair and equitable championship should be created to provide national-level competition among the best eligible student-athletes and teams of member institutions, with consideration also for approved regional structures for certain championships.

    "Access to championships shall be provided by a combination of automatic qualifiers and at-large selections (the cabinet previously approved guidelines based on sponsorship percentages)."

    I agree that having automatic qualifiers means that the NCAA does not get, or even attempt to get, the top 64 teams. However, for the 34 at large selections, the NCAA's clear policy is to select the 34 best teams.

    If that is the NCAA's policy, then I think my suggestion is reasonable that the NCAA use the RPI to determine regional strength and use that as a consideration when its other decision criteria do not otherwise lead to an obvious decision. That's relatively easy for the NCAA to do and is consistent with its policy. On the other hand, if the NCAA really doesn't want the best 34 at large teams, and doesn't really want seeds in an order based on strength, it should say so and should enunciate what its alternative policy is. (For example, the NCAA could say that its policy is to distribute the 34 at large spots regionally around the country in proportion to the number of teams in each region. This is what the RPI tends to do, offset only to the extent that there is sufficient inter-regional competition to damp down the tendency. Or, the NCAA could say that it has two conflicting policies, each of which it will take into account.)
     
  5. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006

    That helps to show why the smart West Coast schools are scheduling East coast. And why it may be smarter to schedule schools in the 30-80 RPI range than top 10.


    Any system where you can "work" the results like that needs revision.
     
  6. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    My examples of Marist, Western Ky, etc may have fueled some misunderstanding here. It isn't the small colleges like them that usually benefit from the flaws in the RPI -- it's big conference teams that either play them or (even more likely) big conference teams who play other teams that play them. By not getting a better measure of a team's strength of schedule (which also means not getting a better measure of the strength of schedule of the opponents of that team), the RPI benefits certain teams unfairly. cpthomas, for example, has shown how it tends to benefit schools outside the West.

    If you tend to play (or some of your conference rivals tend to play) the better programs in conferences like Conference USA, you'll benefit, because those teams will tend to have good records thanks to all the weaker teams in those conferences. It pads the RPI not so much of the top teams in the weaker conference as of (a) the teams that play those teams, and (b) the teams that play the teams that play those teams (the opponents of the opponents).
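    A quick numerical sketch of this padding effect, with made-up records (and simplified: the real RPI also excludes games against the rated team when computing its opponents' winning percentages):

```python
# Hypothetical illustration of the point above: RPI Element 2 averages
# your opponents' winning percentages, so an opponent whose record is
# padded by a weak conference hands extra Element 2 credit to everyone
# who schedules it.

def winning_pct(wins, losses, ties):
    """Winning percentage as the RPI counts it: a tie is half a win."""
    games = wins + losses + ties
    return (wins + 0.5 * ties) / games

# Opponent X: top team in a weak conference, padded to 15-3-0.
# Opponent Y: mid-table team in a strong conference, 9-9-0.
opp_x = winning_pct(15, 3, 0)
opp_y = winning_pct(9, 9, 0)

# Two teams of equal true strength, one scheduling X and one scheduling Y,
# get very different Element 2 contributions from that single game.
print(f"Element 2 credit for playing X: {opp_x:.3f}")  # 0.833
print(f"Element 2 credit for playing Y: {opp_y:.3f}")  # 0.500
```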


    *** note: wow, I'm behind the conversation here! Lots got posted before I got around to sending this in...
     
  7. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    There's another problem with the selection process that favors big conferences. The new ruling that prevents 1st and 2nd round teams from the same conference from playing each other is geared toward 3 or 4 specific conferences where this is an issue, specifically the ACC, Big 10, Big East, and Pac-10.

    I think it's an OK rule, though it flies in the face of the previously stated guiding principle that the best schools should play each other.

    The more equitable principle is ignored, however. Currently there is a 350-mile limit for teams to be considered "close" to each other so that a higher seed can host. That works fine in the conferences mentioned above, but it doesn't always work well in conferences spread out over a large geographic area. The West Coast Conference, for instance, is spread out over 1500 miles or so, so the same-conference rule almost never affects it. But the 350-mile rule is brutal, because it means that its schools don't have much chance of hosting even if they are #1 seeds. There are only 5 schools Portland could play against, which means that even if it was #1 in the country, it would have to travel. The entire state of Oregon is 300x400 miles. Washington, Nevada, California, and Idaho are similarly proportioned.

    I propose instead that the closest schools be defined as the closest 30 schools or so, so that the distribution would more closely approximate the East coast distribution.

    This isn't hypothetical, by the way; it has actually happened.
     
  8. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    First, Morris20, it's great to have you participating in this thread, since having someone with actual experience with the NCAA's committee process is a great plus. I hope you'll keep contributing.

    Kolabear suggested it would be interesting to see the differences between Albyn Jones' SoccerRatings and the RPI. In the attached pdf file is a table comparing the 2007 rankings from the three Division 1 Women's Soccer ratings systems that I know of: (1) The NCAA's RPI; (2) Albyn Jones' SoccerRatings; and (3) the Massey system. These are rankings based on those ratings as of the completion of the 2007 regular season (including conference championships). In other words, for the RPI, these are the rankings the NCAA Women's Soccer Committee used for 2007 Tournament purposes. I have excluded from these rankings all teams that received automatic conference berths, meaning that these are the rankings in order of those teams eligible for at large selection. The bold face teams are those the Women's Soccer Committee selected for at large berths. The regular face teams are those the Committee passed over.

    To reiterate earlier information about the SoccerRatings and Massey systems, both incorporate an approach simply described as "If A beat B and B beat C, then it's logical to believe that A will beat C." That's an over-simplification but covers the basic concept. This, of course, is very different from the RPI's approach. In addition, SoccerRatings and Massey take into account home field advantage, which the RPI does not (except in the bonus/penalty adjustments it makes after computing a basic RPI). In addition, Massey considers goal differential. SoccerRatings starts out with a "seeding" of teams based on past history, the weight of which diminishes over the course of a season but does not diminish to zero; Massey also starts with a "seeding," but its weight is down to zero by the end of the season.
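    The "A beat B, B beat C" idea can be made concrete with a toy rating fit in the same spirit (this is my sketch, not the actual Massey or SoccerRatings algorithm, and the games and margins are invented):

```python
# Toy "transitive" rating: find ratings r so that r[winner] - r[loser]
# best matches each game's goal margin, in least-squares fashion.
# Even with no A-vs-C game, A ends up rated above C through B.

games = [("A", "B", 1),  # A beat B by 1
         ("B", "C", 2)]  # B beat C by 2

r = {"A": 0.0, "B": 0.0, "C": 0.0}

# Minimize sum over games of (r[w] - r[l] - margin)^2 by gradient descent.
for _ in range(2000):
    for w, l, margin in games:
        err = (r[w] - r[l]) - margin
        r[w] -= 0.05 * err
        r[l] += 0.05 * err

ranked = sorted(r, key=r.get, reverse=True)
print(ranked)  # A ranked above C despite never playing them
```

    Because each update moves winner and loser by equal and opposite amounts, the ratings stay centered around zero, which is the same role the sum-to-zero constraint plays in Massey-style least-squares systems.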

    I'd be interested to hear others' thoughts about what the rankings comparisons of these three systems show, if anything.

    One thing you may note is that the RPI ranked Samford at #27, whereas SoccerRatings ranked Samford at #78 and Massey at #57. I thought Samford might provide a good example of the RPI's "pod" problem that I have discussed previously, so I took a look at the interconnection between Samford and the West Region during the 2007 season. (I picked the West Region because it is the farthest removed geographically from Samford and its normal playing "pod" and also because of the earlier discussion about the West Region most likely being discriminated against by the RPI during the 2007 season.) Samford itself played no games against the West Region, so West Region games had no impact on Samford's RPI Element 1. Samford's opponents played a total of 7 games against West Region teams out of a total of about 380 games they played altogether. So, West Region games had a 1.8% impact on Element 2 of Samford's RPI. That means one would expect West Region games also to have about a 1.8% impact on Element 3 of Samford's RPI. As discussed previously, Elements 2 and 3 have an effective weight, in computing the RPI, of 50%. So, overall, West Region games had about a 1% impact on Samford's RPI. In other words, their impact was close to negligible. What this means, in effect, is that in real life one cannot use the RPI to compare Samford to West Region teams.
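    The back-of-the-envelope arithmetic in the paragraph above, spelled out (the figures, including the 50% "effective weight" for Elements 2 and 3 combined, are the ones given in the post, not the RPI's nominal weights):

```python
# Samford / West Region cross-connection arithmetic from the post.
west_games_by_opponents = 7
total_opponent_games = 380

# Share of Samford's opponents' games played against the West Region,
# i.e. the West's footprint in Element 2 (and roughly Element 3 too).
element2_impact = west_games_by_opponents / total_opponent_games
print(f"Impact on Element 2: {element2_impact:.1%}")   # about 1.8%

# Elements 2 and 3 together carry an effective weight of about 50%,
# so the West's footprint in the overall RPI is about half that share.
effective_weight = 0.50
overall_impact = element2_impact * effective_weight
print(f"Impact on overall RPI: {overall_impact:.1%}")  # about 0.9%, i.e. ~1%
```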

    I have to say, however, that this also is a problem for the SoccerRatings and Massey systems. They also have the "not enough cross-pod games" problem. (This problem, by the way, is worse for West Region teams than for teams from any other region, most likely due to obvious geographic factors.) It's why all the systems have large standard errors. It also creates a problem for the NCAA: With no head-to-head results and no results against common opponents to compare, what is the NCAA to do? The answer has been to pick a system (the RPI) and treat it as accurate even if it isn't. Although I might argue with the system picked, I myself have concluded that it's hard to find a better alternative approach (such as relying on polls).
     
  9. Craig P

    Craig P BigSoccer Supporter

    Mar 26, 1999
    Eastern MA
    Nat'l Team:
    United States
    The advantage that I presume the Jones and Massey rankings have relative to RPI in the face of relatively few intersectional games is, they take advantage of the information they do have by considering greater removes than simply the opponents and the opponents' opponents. (And furthermore, they do not assume, as RPI does, that all wins are created equal, regardless of who they may happen to be over.)
     
  10. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    For those interested in Albyn Jones' SoccerRatings system as a results predictor, check the Pick Em thread at the following link for how well the Jones system identified the likely winners of some of Friday night's key games: https://www.bigsoccer.com/forum/showthread.php?t=771637. Jones' system fared very well.
     
  11. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    And for further interest, I've been looking at Albyn Jones' predictions for the last couple of years, particularly the predictions for Portland games, since that's the team I'm most interested in.

    AJ has correctly predicted the result for every UP tournament game in the last two years, since I started checking. It has also correctly predicted the outcome of every game UP has played against a ranked team last year and, so far, this year. That includes games against UCLA and USC both years.
     
  12. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I am attaching, as a pdf file, the Unadjusted RPI (i.e., not adjusted for bonuses for good wins/ties; penalties for bad losses/ties), covering games through Sunday, September 14. (Eight teams still are missing from the RPI report. This is because either a team, or a team it has played, or a team its opponents have played, has not played enough games to allow the RPI program to function. Teams also were missing from the RPI reports previously published on this thread, for similar reasons.)

    One of the things I've discovered about BigSoccer is that there's a limit on the cumulative size of attachments, at least on a per-thread basis. Because of this, I've found that each week, in order to upload a new RPI report, I need to delete the previous week's report. (At least, I need to do this if I want to keep the two attachments to the thread's initial post, which I want to do.) What this means is that newcomers who want to track the change in the RPI over the course of the season will not be able to download the season's previous RPI reports. So, if you read an earlier post that purports to have an attached report, but it doesn't and you want it, send me a private message with an email address and I will email the reports you want to you.

    Also, if anyone wants a copy of my complete Excel database and formulas, send me a private message and I'll email you my Excel spreadsheet that contains them. It includes, if you know how to unravel it, the various formulas I use to compute the RPI. (I also have to execute some steps manually.) If you're an Excel expert, you can see if you can find a way to make the system more automatic, and you also can track the NCAA's RPI reports and see if you can pinpoint the bonus/penalty amounts, which I am sure I do not yet have exactly correct.

    Now, a couple of words about the accuracy of the RPI. I have done a study to try to determine its accuracy. I've done this by looking at the standard error of Albyn Jones' SoccerRatings and assuming that the RPI would have at least the same standard error. (I say "at least" because SoccerRatings is based on more games data than the RPI and therefore most likely is actually more accurate.) In addition, I've done it through the alternate course of seeing how accurate the RPI is at predicting the outcomes of NCAA Tournament games (which is the only truly appropriate test, since selecting and seeding teams for the Tournament is the only purpose of the RPI). Unfortunately, an NCAA Tournament has only 63 games, so that's not a very big database from which to try to determine predictive accuracy.

    Based on my study, I think I've demonstrated that the standard error for the RPI is between .02 and .03. (What this means, more or less, is that one can expect the actual strength of a team to be within .02 to .03 RPI rating points of its RPI rating 68% -- or roughly 2/3 -- of the time. That seems like a reasonable standard to hold a rating system's predictive ability to; if a system's prediction success gets much lower, such as down to 50-50, one might as well toss a coin to decide how to rate teams.) If you look at the attachment, you'll see what this means in terms of how the RPI works. Simply put, from a statistical perspective, it isn't very accurate. If you want more detailed information, download and read the paper attached to the first post on this thread, which discusses this in detail.

    I should point out, however, that this standard error does not apply to any of the "in-season" RPI reports. It applies only to the RPI as of the completion of the regular season's games (including the conference championship tournaments), based on the number of games played as of then. All of the earlier reports, being based on fewer games, are less reliable, with the lack of reliability growing the earlier in the season one goes.
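    A sketch of what a standard error in the .02-.03 range implies, assuming roughly normally distributed rating errors (the 68% figure is the one-standard-error band of a normal distribution; the .01 gap below is my made-up example):

```python
# Interpreting a rating standard error of about .02 via the normal CDF.
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

se = 0.02  # assumed standard error of a single team's RPI rating

# Chance a team's true strength lies within one SE of its rating:
within_one_se = normal_cdf(1) - normal_cdf(-1)
print(f"Within one standard error: {within_one_se:.0%}")  # ~68%

# For comparing two teams, the error of the rating *difference* is
# larger (sqrt(2) * se if the two errors are independent). Chance the
# higher-rated team is truly stronger, given a .01 rating gap:
gap = 0.01
p_better = normal_cdf(gap, mu=0.0, sigma=se * math.sqrt(2))
print(f"P(higher-rated team is truly better): {p_better:.0%}")
```

    In other words, a rating gap smaller than the standard error tells you only a little more than a coin flip about which team is actually stronger.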
     
  13. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Just for fun, I ran some calculations of results by region in inter-regional games. Here are the results for games through September 16. The number columns represent wins, losses, ties, total inter-regional games, # of teams in region, and # of inter-regional games per team:

    Region        W    L    T   Games  Teams  Per team
    Central       63   58   12  133    57     2.33
    Great Lakes   74   82    8  164    62     2.65
    Mid Atlantic  55   68   19  142    48     2.96
    Northeast     47   54   12  113    41     2.76
    Southeast     81   73   17  171    61     2.80
    West          56   41   14  111    51     2.18

    Sorry for the chart format, but I can't figure out how to have it come out in true table form.

    There are no strength of schedule factors in this, so you have to look at the win/loss/tie numbers with that in mind.
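    A quick way to sanity-check the table above (columns taken in the order listed: wins, losses, ties): every inter-regional game produces either one win and one loss, or two ties, so the win and loss columns should sum to the same total and the tie column total should be even. A minimal sketch:

```python
# Consistency check on the inter-regional results table.
regions = {
    "Central":      (63, 58, 12),
    "Great Lakes":  (74, 82, 8),
    "Mid Atlantic": (55, 68, 19),
    "Northeast":    (47, 54, 12),
    "Southeast":    (81, 73, 17),
    "West":         (56, 41, 14),
}

total_w = sum(w for w, l, t in regions.values())
total_l = sum(l for w, l, t in regions.values())
total_t = sum(t for w, l, t in regions.values())

# One win implies one loss elsewhere; each tie is counted by both regions.
print(total_w, total_l, total_t)  # -> 376 376 82
```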

    What I find most interesting is the # of inter-regional games per team column. Clearly, geographic factors play a big part in the # of inter-regional games that teams play. The Mid Atlantic region plays the most, having other regions on three sides and being in a densely populated area of the country. The West region plays the least, having only the Central region as a border and needing to travel major distances to get to other regions, with the Central region following closely behind the West region in terms of the least inter-regional games per team.

    From an RPI perspective, this provides some information as to why the RPI might have more trouble rating teams from the West and Central regions in relation to teams from the other regions than in rating teams from the Mid Atlantic, Northeast, Southeast, and Great Lakes regions in relation to each other.

    I'll try to remember to run these numbers again in two weeks, after most of the inter-regional games will have been completed. (In most cases, the only remaining inter-regional games after then will be those in conferences whose teams span two or more regions.)
     
  14. transplanted

    transplanted New Member

    Jul 27, 2008
    I am going to show my lack of understanding here, because I am missing something and I have the feeling it is going to be obvious. I'm not trying to dispute anything just trying to understand.
    Let me say that I understand the relevant part of the chart to the argument: teams are playing fewer than 3 games a season out of region, which undermines the statistical reliability of the RPI. I'm not convinced of the far-reaching importance of this, but I understand that the last number (games out of region per team) is the most important part of your case. That isn't what I am struggling to understand.

    My question is this: how can all regions have winning records in their out-of-region games? I would think that if one team/region has a win out of region then another team/region would have to have a loss, but I think if you add it all together it is 380 wins to 82 losses. My understanding is the wins and the losses should add up. If you add all the wins, ties, and losses of the out-of-region games, it should look like x y x. What am I missing?

    Thanks
     
  15. Morris20

    Morris20 Member

    Jul 4, 2000
    Upper 90 of nowhere
    Club:
    Washington Freedom
    CliveW transposed losses and ties in his chart headings. For instance the central region lost 58 games on this chart and tied 12, despite the headings.
     
  16. transplanted

    transplanted New Member

    Jul 27, 2008
    Makes sense. Still doesn't quite add up, but only 4 lost losses now. I should have been able to figure that out on my own.
     
  17. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    I am an idiot. Sorry cpthomas.

    Here's the corrected table

    [image: corrected table]

    It shows the Central, Southeast and West regions with winning records , the rest with losing records out of region.

    I can't go back and edit the old post, but I could delete the image file it references....
     
  19. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    The purpose of this particular post is to describe, as background information, the criteria the NCAA Division I Women's Soccer Committee is supposed to consider in selecting at-large teams for the Tournament. I believe the Committee considers the same criteria for purposes of seeding. (Morris20, does this fit with your experience as to seeding?)

    The criteria are set out in the Handbook for the Championship that the NCAA publishes every year. I have been waiting to post this until the NCAA publishes its 2008 Handbook. Unfortunately, according to NCAA staff, because of problems they are having with their NCAA.org website, they have not been able to post the 2008 Handbook yet this year. It's been a matter of "any day now" for the last two weeks, so I'm going to go ahead and provide the information now, based on last year's Handbook. I do not believe there have been any changes for DI Women's Soccer (as distinguished from DI Men's Soccer, where there were some changes). If there are changes indicated in the new Handbook once it's published, I'll note that in a future post.

    Here's what the Handbook says:

    "The following criteria shall be employed by a governing sports committee in selecting participants for NCAA championships competition [Bylaw 31.3.3; Criteria for Selection of Participants]:

    "* Won-lost record;

    "* Strength of schedule; and

    "* Eligibility and availability of student athletes for NCAA championships.

    "In addition to Bylaw 31.3.3, the women's soccer committee has received approval from the Division I Championships/Competition Cabinet to consider the following criteria in the selection of at-large teams for the soccer championship (not necessarily in priority order):

    "Primary Criteria

    "* Adjusted Rating Percentage Index (RPI), which includes:

    " 1. Won-lost Record (25 percent)

    " 2. Strength of Schedule (50 percent)

    " 3. Opponent's strength of schedule (25 percent)

    " 4. Bonus/penalty system

    " (Committee use of the RPI is provided in Appendix I)

    "* Head-to-head competition.

    "* Results versus common opponents.

    "Secondary Criteria

    "If the evaluation of the primary criteria does not result in a decision, the secondary criteria will be reviewed. All the criteria listed will be evaluated.

    "* Results versus teams already selected to participate in the field (including automatic qualifiers with an RPI of 1-75)

    "* Late season performance -- defined as the last 8 games including conference tournaments (strength and results).

    "Recommendations are provided by regional advisory committees for consideration by the women's soccer committee. Coaches' polls and/or any other outside polls or rankings are not used as a selection criterion by the women's soccer committee for selection purposes."

    There are two other elements of interest listed in the portion of the Handbook that contains the above information:

    "To be considered during the at-large selection process, a team must have an overall won-lost record of .500 or better."

    "Regular-season games decided by the penalty-kick tiebreaker procedure shall be considered as ties for selection purposes."
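    The RPI components quoted above (25% won-lost record, 50% strength of schedule, 25% opponents' strength of schedule) reduce, before bonuses and penalties, to a weighted average. A minimal sketch of that unadjusted calculation, with made-up records (simplified: the NCAA's actual computation excludes a team's own games when figuring its opponents' winning percentages, which this sketch skips):

```python
# Unadjusted RPI sketch: 25% own winning percentage, 50% opponents'
# winning percentage, 25% opponents' opponents' winning percentage.

def wp(wins, losses, ties):
    """Winning percentage with ties counted as half a win."""
    return (wins + 0.5 * ties) / (wins + losses + ties)

def unadjusted_rpi(own_record, opp_records, opp_opp_records):
    own = wp(*own_record)
    owp = sum(wp(*r) for r in opp_records) / len(opp_records)
    oowp = sum(wp(*r) for r in opp_opp_records) / len(opp_opp_records)
    return 0.25 * own + 0.50 * owp + 0.25 * oowp

# Hypothetical team: 12-4-2, with a few sample opponent records.
rpi = unadjusted_rpi((12, 4, 2),
                     [(10, 6, 2), (14, 3, 1), (8, 8, 2)],
                     [(9, 7, 2), (11, 5, 2), (7, 9, 2), (13, 4, 1)])
print(f"Unadjusted RPI: {rpi:.4f}")
```

    Note how the schedule terms carry 75% of the nominal weight, which is why the later posts about who your opponents (and their opponents) play matter so much.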

    Appendix I to the Handbook contains "RPI Information." It states:

    "The RPI shall be used as a selection tool. When breaking down the RPI the committee can consider comparing data of individual teams, including but not limited to overall record, Division I record, overall RPI rank, nonconference record and RPI rank, regular season record and conference tournament results. Below is the Bonus/Penalty structure for the RPI:

    Result  Location  Opponents' RPI Rank  Bonus/Penalty
    Win     Away      1-40                 Bonus (highest point value awarded)
    Win     Neutral   1-40                 Bonus
    Win     Home      1-40                 Bonus
    Win     Away      41-80                Bonus
    Win     Neutral   41-80                Bonus
    Win     Home      41-80                Bonus
    Tie     Away      1-40                 Bonus
    Tie     Neutral   1-40                 Bonus
    Tie     Home      1-40                 Bonus
    Tie     Away      41-80                Bonus
    Tie     Neutral   41-80                Bonus
    Tie     Home      41-80                Bonus (lowest point value awarded)

    Tie     Away      135-205              Penalty (lowest penalty imposed)
    Tie     Neutral   135-205              Penalty
    Tie     Home      135-205              Penalty
    Tie     Away      206-301 [sic]        Penalty
    Tie     Neutral   206-301              Penalty
    Tie     Home      206-301              Penalty
    Loss    Away      135-205              Penalty
    Loss    Neutral   135-205              Penalty
    Loss    Home      135-205              Penalty
    Loss    Away      206-301 [sic]        Penalty
    Loss    Neutral   206-301              Penalty
    Loss    Home      206-301              Penalty (greatest penalty imposed)"

    If this table did not come out well between my typing it into the template and its conversion into a post, sorry about that. Hopefully you can make sense of it.

    One note about the table: The penalties for ties/losses to teams 206-301, described in the 2007 Handbook, probably are for a year earlier than 2007 since in 2007 there were 314 teams and those penalties presumably applied in relation to teams 206-314 in 2007 and also presumably will apply in relation to teams 206-320 in 2008.

    According to Morris20, who has direct experience with it, we can expect that the Committee will hold rigorously to these criteria. That fits with my analysis of what the Committee did last year.

    Also, Morris20, I am not sure what the following phrase from the Appendix I text means: "nonconference record and RPI rank." Can you give any help with what "and RPI rank" means in that context?
     
  20. Morris20

    Morris20 Member

    Jul 4, 2000
    Upper 90 of nowhere
    Club:
    Washington Freedom
    ok. First of all, CPT, you're spending too much time on this :) (and with the caveat that I've never been on a D1 championship committee)

    RPI is the raw number: "University College has an RPI of 72." RPI rank is where that number places them, from 1 to 320, as in "Univ. College's RPI of 72 puts them #32 in the country."

    Basically, you've got a bunch of people who are supposed to bring home the bacon for their respective institutions/conferences. As you marshal your argument to get your guy in vs. some schmuck from another region/conference/school; you're restricted to arguing over the selection criteria as they are written (this avoids more radical methods for defending a selection, like name calling, or, at the national level, knife fighting).

    Once the regional committee has created its top 10-20 (in region), ranked in order, the national committee has to figure out where they fit in nationally (with a bias towards NOT changing the order presented by the regional committee within a given region).

    So, given the way the selection criteria are (were?) written, you've got to argue based on three things: #1, School A has a better RPI than School B; #2, if that's not true, School B should be ahead of School A because they beat them on the field this year (head-to-head); or #3, they did better against common opponents (which they don't have if they're not in the same region, unfortunately). That's it. If you can't make a case there, School A is ahead of School B and you go to the next spot in the rankings . . .

    If you can argue that their RPIs are basically even and you can't separate the teams with the other "primary" criteria, THEN you can argue based on secondary criteria - RPI vs. teams already in (including AQs ranked better than #75, so NOT Oakland) and how great they've been over the last 8 games.

    Given these criteria, you should be able to see how a team's RPI is going to pretty much seed them for selection and the tournament.

    Editorial comment: The current criteria seem to have been developed by the committee people, since they make the committee's job VERY cut and dried (easy? at least less controversial), if coaches don't like this they can vote to change it. Right now, the committee really can't even have a discussion unless the RPI's are within some kind of standard of error and then they probably have to go with them anyway (unless the teams played each other or several common opponents - which will be a wash or they wouldn't be close on RPI), so you don't get a lot of arguing.
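    The decision ladder Morris20 describes can be sketched in a few lines of Python. Everything here is illustrative: the team dicts, the `rpi_tolerance` threshold for "basically even" RPIs, and the way head-to-head and common-opponent edges are passed in are all my own placeholders, not the committee's actual procedure.

```python
def rank_ahead(a, b, head_to_head=None, common_opp_edge=None,
               rpi_tolerance=0.005):
    """Return whichever team dict should be ranked ahead under the
    primary criteria described above. Teams look like
    {"name": "School A", "rpi": 0.6123}; head_to_head and
    common_opp_edge name the team (if any) that won those
    comparisons. The tolerance for "basically even" RPIs is a
    made-up placeholder."""
    # 1. RPI decides, unless the two teams are basically even.
    if abs(a["rpi"] - b["rpi"]) > rpi_tolerance:
        return a if a["rpi"] > b["rpi"] else b
    # 2. Head-to-head result this year.
    if head_to_head in (a["name"], b["name"]):
        return a if head_to_head == a["name"] else b
    # 3. Results against common opponents.
    if common_opp_edge in (a["name"], b["name"]):
        return a if common_opp_edge == a["name"] else b
    # Secondary criteria (RPI vs. teams already in the field,
    # last-eight-games form) would be consulted here; as a last
    # resort, fall back to the raw RPI.
    return a if a["rpi"] >= b["rpi"] else b
```

    The point of the sketch is the ordering: head-to-head and common opponents never come into play unless the RPIs are too close to call, which matches Morris20's observation that the committee "really can't even have a discussion" outside that window.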
     
  21. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    National teams seem to be playing about the same number of games, or perhaps a few more, than colleges during Olympic and WC years.

    The US women will play 26 games or so before the year is out (they played 16 games leading up to the Olympics, 6 (?) during them, will play Ireland three times, and perhaps another game or two).

    I think the issue with National teams is different.

    1) Not all games have the same importance, and sometimes winning isn't as important to a team as seeing players.

    2) Although they all carry the name "National team," rosters differ from game to game.

    3) Teams often play each other several times over the course of a year, greatly reducing sample size and skewing the distribution of games; the problem of rating across different regions is even worse than for colleges.

    For example, the US played Brazil at least three times, Mexico (I think) three times, Canada three times, Norway three times, and will have played Ireland three times. Only a small portion of those games were between full-strength teams on both sides, leaving a very small sample size for the rest of the world.
     
  22. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    Well, I feel somewhat flattered to see one of my posts quoted from months ago (January, to be exact), but what's up? What brought this particular point up right now? Did I miss something in the last couple weeks here?

    I was responding to a very specific criticism of the Albyn Jones ratings:
    I'm a strong believer in the Albyn Jones ratings, at least for Division 1 College Women's Soccer, and I'm happy to try to defend it with my admittedly limited mathematical and statistical acumen. Looking back at my reply, I have to say I found it to be one of my more succinct and sensible contributions to BS. You can click on that blue arrow-thingie in the quote box to link back to it.

     
  23. Cliveworshipper

    Cliveworshipper Member+

    Dec 3, 2006
    Your post was so good I thought it was worth bringing back up.
     
  24. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Morris20, thanks very much for your great input. Your description of how the selection/seeding process works, even though it's based on experience with D3 rather than DI, is exactly how I had deduced the DI selection/seeding process works, based on comparing last year's RPI to the seeding and at-large decisions the DI Women's Soccer Committee made last year.

    My idea for this thread is that once the NCAA starts releasing its RPI rankings, we can track along with the NCAA regional advisory committee/Women's Soccer Committee process to see how their decision-making most likely is evolving. And then, the last Sunday of the season, just before the Committee announces the bracket, we can try to predict in advance what the at large selections and seeds will be. I'll bet we can come extremely close.

    Along that line, another question to which you might know the answer. The DI Committee seeds four #1s, four #2s, etc. But it appears to me that for bracket placement purposes they rank the seeds as 1 through 16 and place them in the bracket accordingly. That way, the #1 ranked team plays the #16 ranked team in the Elite Eight, the #2 plays the #15, etc. Do you know if I'm right on this?
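    If that reading of the bracket is right, the placement rule is just the standard bracket fold, where seed i is slotted to meet seed (17 - i). A quick sketch of the pairing pattern (the function name and the assumption that the committee internally ranks its four 1-seeds through four 4-seeds as 1 through 16 are mine):

```python
def seeded_pairings(num_seeds=16):
    """Standard bracket fold: seed i is slotted opposite seed
    (num_seeds + 1 - i), so 1 meets 16, 2 meets 15, and so on,
    if both teams advance that far."""
    return [(i, num_seeds + 1 - i) for i in range(1, num_seeds // 2 + 1)]
```

    For 16 seeds this yields (1, 16), (2, 15), ..., (8, 9), which is the pattern described above.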

    PS - My wife agrees with you that I'm spending entirely too much time on this.
     
