Alternative Rating Systems

Discussion in 'Women's College' started by cpthomas, Nov 25, 2014.

  1. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    My results are based only on games that are home/away, so neutral game results play no part.
     
  2. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Sleeping in their own beds also can be true for conference opponents, e.g., San Diego, Loyola Marymount, and Pepperdine; and San Francisco, St. Mary's, and Santa Clara. But, you may be right that in the big picture, teams on average play "away" non-conference games closer to home than they do "away" conference games. This could be an economic matter. I don't know for a fact that this is true (figuring that out would be too much of a task), but it's a possible explanation.
     
  3. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    That is something I've thought might be a cause. On the other hand, for conference games, visiting teams are familiar with their opponents' fields, playing there every other year, so one would think home field would be less of an advantage.

    I'm wondering whether there is a psychological factor involved, whether teams psychologically approach conference games differently than non-conference games and whether something about that causes home field to be more advantageous in conference games. For example, do teams feel it's more important to win home conference games than it is to win home non-conference games? Maybe home conference games are a matter of "defending my home" whereas home non-conference games are a matter of getting ready for the conference season? If this is the case, it seems like there might be something in the numbers for coaches to learn about how to better approach non-conference games -- treat them more as "life and death."
     
  4. Tom81

    Tom81 Member+

    Jan 25, 2008
    This is my decidedly non-expert psychological opinion.
    IOW, you get up for conference games more so than for non-conference games.
     
  5. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I've just completed a big project updating all my databases, organizing spreadsheets, doing new calculations, updating work related to Massey's and Jones' rating systems, and doing major re-writes of most of the RPI pages at the RPI for Division I Women's Soccer website. I previewed some of that work on this thread earlier this year. While in the future I'll be adding more data into the system each year, and while I'll soon do some reprogramming of my correlation and performance percentage systems for evaluating rating systems (something I now can do because of the volume of data I have), I'm pretty confident that I already have enough data in the system for my analyses to be just about as good as I'm going to be able to get them. And, I have a high degree of confidence in them and in the conclusions they lead to.

    For those who are really interested in the RPI and its inner workings and in how different versions of the RPI compare to each other, to the two alternative RPI-based systems I've developed, and to the Massey and Jones systems, I urge you to review (or re-review) the following pages at the RPI website, all of which I have substantially re-written based on the work I just completed:

    RPI: Non-Conference RPI

    RPI: Regional Issues

    RPI: Element 2 Issues

    RPI: Strength of Schedule Problem

    RPI: Modified RPI? (This page pulls together a lot of information from the other pages, includes analyses of the RPI-based systems I've developed and of Massey and Jones, and then does a comparison of all the systems based on eight factors.)
    Here's a link to the RPI website's home page, from which you can navigate to the pages mentioned above: RPI for Division I Women's Soccer.

    To whet your appetites, here's part of the Summary and Conclusion section of the "RPI: Modified RPI?" page:

    I believe the Report Card [that is in a table included on the webpage] provides a good picture of where the systems stand in relation to each other: Massey is at the top of the class by a good margin, then comes Jones ahead of the others by a good margin. Following those two, the Improved RPI and Iteration 5 [my systems] are quite close and are better than any of the NCAA versions by a fair margin. Of the NCAA versions, the 2010 RPI and the 2009 RPI are next, quite close to each other. The two poorest are the 2012 RPI, which is the NCAA's current system, and the Unadjusted RPI.

    In the Report Card, the differences are accounted for fundamentally by how fair the systems are in rating conferences and regional playing pools in a single system, both in terms of general fairness and in terms of discrimination or lack of discrimination in relation to conference and regional playing pool strength. What the report card shows is that the Women's Soccer Committee's decisions have made the NCAA's current RPI formula the most unfair of any of the systems, from a conference and regional playing pool perspective, save for the Unadjusted RPI.

    Regarding Massey, it is critical to note that all of the above analysis is of how well the rating systems measure teams' performance during this year. Although both Massey and Jones use seedings of teams based on prior years, the above analysis demonstrates that this allows them to measure performance this year better than the other systems: in terms of overall accuracy and accuracy by rank, all the systems perform essentially equally; and in terms of general fairness and lack of discrimination in relation to conference and regional playing pool strength, they perform better than the other systems. Further, the systems I've developed, though not as good as Massey or Jones, are significantly better than any of the NCAA's systems.
    This work should make my friend kolabear happy, as it proves -- I believe quite authoritatively -- the correctness of what he has been saying for a number of years about the superiority of systems such as Massey's over the RPI.
     
    kolabear repped this.
  6. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Here's an interesting fact I've identified in my looks at different rating systems:

    Approximately 15% of all games involve opponents whose ratings are close enough that home field advantage reasonably could be the factor that determines the game outcome. In the other approximately 85% of games, the teams' ratings are far enough apart that game location isn't going to be the determining factor. It might contribute to an upset, but there would have to be an element of true upset too, above and beyond any game location effect.

    This is true for all the rating systems I've looked at.
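    For anyone who wants to check a figure like this against their own ratings, here's a minimal sketch of the kind of count involved. The ratings dictionary, game list, and threshold below are all hypothetical placeholders; the threshold should be whatever home field is worth on the scale of the rating system being checked.

```python
# Minimal sketch (not cpthomas's actual code): what share of games have a
# pre-game rating gap small enough that home field could plausibly flip the
# result?  'ratings' and 'games' are made-up inputs; 'hfa' is the rating
# system's home-field value on its own scale.

def share_within_hfa(games, ratings, hfa):
    """games: list of (home_team, away_team) pairs that were played."""
    close = sum(1 for home, away in games
                if abs(ratings[home] - ratings[away]) <= hfa)
    return close / len(games)

ratings = {"A": 1510, "B": 1460, "C": 1300}          # toy ratings
games = [("A", "B"), ("A", "C"), ("B", "C")]         # toy schedule
print(share_within_hfa(games, ratings, hfa=66))      # 0.333... here
```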
     
  7. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    In case there are any other rating nuts like me who are interested, I've just completed some programming refinements for the "correlator" and "performance percentage" system I developed to measure how well different rating systems do at providing ratings that accurately reflect how teams actually performed over the course of the season. I've reported results using the previous version of the system earlier on this thread.

    I was able to do the refinements because of the increase in the data available since I first developed the system a couple of years ago. The refinements make the numbers the system produces more reliable and also allow exact apples to apples comparisons of different RPI versions' ratings to Massey's and Jones' ratings (subject only to the effects of my having fewer years of Massey and Jones ratings than I do of RPI ratings).

    As a follow-up to the re-programming process, I also have done significant re-writes of a bunch of the "RPI: ...." pages at the RPI for Division I Women's Soccer website. Those pages now provide a comprehensive and very detailed (and very dry and, probably to many, boring) description of the RPI, its inner workings, how the "correlator" and "performance percentage" systems work, and how all of the NCAA's versions of the RPI, the modified RPI versions I've created, and Massey and Jones perform both in terms of general accuracy and in terms of rating the conferences' and regions' teams in a single system.

    As part of comparing the different rating systems, I developed a purely objective grading program (with a 61% to 100% scale) for the systems in relation to general accuracy and rating the conferences and regions in a single system. The following table shows the final grades the different systems receive when general accuracy is weighted at 50% and success at rating the conferences and regions in a single system is weighted at 50%, taking all games into consideration:

    [Image: table of final grades for each rating system]

    I'm confident that the results my efforts have produced are reliable. Over time, as I add more years' data into the system, the results will become even more reliable, especially for Massey as I gain more years of his ratings, but it's extremely unlikely that there will be any significant changes from the current results.
     
    kolabear and Soccerhunter repped this.
  8. Soccerhunter

    Soccerhunter Member+

    Sep 12, 2009
    Impressive, CP!
     
    kolabear repped this.
  9. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I'm in the process of producing Elo-based ratings for the 2007 through 2014 seasons and have completed my first set using the simplest Elo formula. I'm not trying to produce something that meets the NCAA RPI staff's criteria for an "acceptable" formula; I'm just trying to see how well Elo-based ratings correlate with how teams have performed over the course of the season and, particularly, to see if they correlate better than the NCAA's RPI variations. And, in particular, I'm trying to see if they do better at rating conferences' and regions' teams in a single national system, i.e., are "fairer" and avoid systematic discrimination based on conference strength. (Observation to date: yes, they do.)

    Eventually, I'll report on my results.

    But, one issue that came up on this thread a while ago related to the value of home field in Elo-based systems. Albyn Jones suggested that 100 points was the value of home field in an Elo-based system. I had suggested that in Jones' system, the actual value was 68 points (34 x 2). Using the first set of ratings I've just produced -- eight seasons' worth of games, i.e., ~24,000 games -- I can say pretty confidently that in an Elo-based system, the value of home field is 66 points.

    So, kolabear, if I ultimately can give you good Elo ratings next year (I've set up my program to produce them daily), keep in mind that HFA is 66 points, not 100.
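    For anyone curious how a number like 66 can be backed out of the games, here's a rough sketch of one approach (not my actual program): try candidate home-field values and keep the one at which the Elo formula's expected home points match the points home teams actually earned. The game list here is a made-up placeholder.

```python
# Sketch: estimate home-field advantage (HFA) in a basic Elo system by
# calibration.  'games' is a hypothetical list of
# (home_rating, away_rating, home_points) with home_points 1 / 0.5 / 0.

def expected_home(r_home, r_away, hfa):
    """Standard Elo expectation for the home team, with HFA added to its rating."""
    return 1.0 / (1.0 + 10 ** (-((r_home + hfa) - r_away) / 400.0))

def estimate_hfa(games, max_hfa=200):
    actual = sum(pts for _, _, pts in games)
    best_hfa, best_gap = 0, float("inf")
    for hfa in range(max_hfa + 1):
        expected = sum(expected_home(rh, ra, hfa) for rh, ra, _ in games)
        if abs(expected - actual) < best_gap:
            best_hfa, best_gap = hfa, abs(expected - actual)
    return best_hfa

games = [(1500, 1450, 1), (1400, 1480, 0.5), (1350, 1500, 0)]   # toy data
print(estimate_hfa(games))
```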
     
  10. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    #35 cpthomas, Mar 16, 2015
    Last edited: Mar 16, 2015
    Here is my fact (not factoid, by either definition) of the day: During the 2007 Division I women's soccer season (not counting the NCAA tournament), 20% of all games went to overtime. Eventually, after a lot of very tedious work, I'll have an average per season, but I don't see why any season would be particularly different than another.

    So, here's a question: In a rating system, should an overtime win/loss count the same as a regular time win/loss, if the system treats shootouts as ties? On average, is an overtime win/loss more like a regular time win/loss or is it more like a tie? Remember, this question is only in relation to rating systems.
     
  11. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Related to the previous question, in 2007 the average goal differential in games that did not go to overtime was 2.24 goals.
     
  12. Hooked003

    Hooked003 Member

    Jan 28, 2014
    IMO, (1) in a playing system with limited substitutions, a win or loss in OT tells us nothing more of value than a regular time win or loss and (2) in a playing system with unlimited substitutions, a win or loss in OT tells us something more of value about the teams (specifically, depth or lack thereof) than a regular time win or loss. I haven't a clue as to how to quantify that value, though, or how to test the hypothesis.
     
  13. kolabear

    kolabear Member+

    Nov 10, 2006
    los angeles
    Nat'l Team:
    United States
    #38 kolabear, Mar 17, 2015
    Last edited: Mar 17, 2015
    Interesting that Prof. Jones said that. If I recall, the homefield advantage he used in his published ratings varied between 50 and 60 rating points, where, in the scale he used, 60 points corresponded to just about a 60% win probability. (And where 100 points would equal about a 67% expected win probability.)

    Meanwhile FIFA uses a 100 point home advantage BUT they use a different scale, one that I've noticed others call the Elo scale. In the Elo scale 100 points corresponds to a 64% expected win probability (well, roughly speaking. Because FIFA uses values based on game scores it's technically a bit more complicated than that.)

    It looks to me like the 66 point homefield advantage fits closely with what both Albyn Jones and FIFA have also observed. If you're using the scale Prof. Jones used, that 66 points corresponds to just over a 61% expected win probability. FIFA's 100 point homefield advantage using the Elo scale corresponds to a 64% expected win probability. Close -- and since FIFA games are international games (more travel, playing in foreign countries), I can readily see that homefield advantage may be slightly greater for the games FIFA is rating as opposed to collegiate games.


    ***
    just for reference, here's a table comparing the Albyn Jones scale and the Elo scale


    rating differential   win prob. (Albyn Jones scale)   win prob. (Elo scale)
    0                     .500                            .500
    100                   .667                            .640
    200                   .800                            .760
    300                   .889                            .849
    400                   .941                            .909
    500                   .970                            .947
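    Both columns follow simple logistic curves, so other differentials can be filled in easily. Here's a small sketch, where the Jones-scale formula is my inference from the table (winning odds double every 100 points) and the Elo-scale formula is the standard one (odds multiply by 10 every 400 points):

```python
# Sketch reproducing the two win-probability scales in the table above.

def win_prob_jones(diff):
    # Inferred from the table: odds double for every 100 rating points.
    return 1.0 / (1.0 + 2 ** (-diff / 100.0))

def win_prob_elo(diff):
    # Standard Elo logistic: odds multiply by 10 for every 400 points.
    return 1.0 / (1.0 + 10 ** (-diff / 400.0))

for d in (0, 100, 200, 300, 400, 500):
    print(f"{d:4d}  {win_prob_jones(d):.3f}  {win_prob_elo(d):.3f}")
# prints .500/.500, .667/.640, .800/.760, .889/.849, .941/.909, .970/.947
```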
     
  14. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    kolabear, if I understood what I've read, the systems can incorporate -- or not incorporate -- scaling factors to put their numbers on the scale Jones used. And, I'm sure that without using scaling factors, how the win probabilities work in relation to rating differences depends on the details of the formula. For example, the FIFA formula not only uses goal differential but also weights games differently depending on the type of game -- e.g., World Cup games have more weight than friendlies.

    With the basic Elo formula I'm using, with a K factor of 40 and a starting point of last year's end of season ratings (starting out in 2007 with Jones' end of season ratings), and with a new team receiving a provisional starting rating of 1345 (the median), I get approximately the following win probabilities after adjusting ratings for home field advantage:

    100 point difference: 56%

    200: 72%

    300: 82%

    400: 90%

    500: 92%
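    Those percentages are observed frequencies rather than the formula's theoretical curve. Here's a bare-bones sketch of how such a tabulation could be done (hypothetical inputs, not my actual correlator): bucket each game by the HFA-adjusted rating difference in favor of the higher-rated team and average that team's results in each bucket.

```python
# Sketch: observed win rates by rating-difference bucket.  'games' is a
# hypothetical list of (higher_rating, lower_rating, higher_points), where
# ratings already include any home-field adjustment and higher_points is
# 1 / 0.5 / 0 from the higher-rated team's perspective.
from collections import defaultdict

def win_rate_by_bucket(games, bucket_size=100):
    totals = defaultdict(lambda: [0.0, 0])           # bucket -> [points, games]
    for hi, lo, pts in games:
        bucket = int((hi - lo) // bucket_size) * bucket_size
        totals[bucket][0] += pts
        totals[bucket][1] += 1
    return {b: p / n for b, (p, n) in sorted(totals.items())}

games = [(1550, 1430, 1), (1500, 1410, 0.5), (1620, 1380, 1)]   # toy data
print(win_rate_by_bucket(games))   # e.g. {0: 0.5, 100: 1.0, 200: 1.0}
```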
     
  15. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    You're probably right. It might be possible to test your hypothesis by assuming that teams' depth is well represented by their ratings. One then could look at teams' overtime results in relation to their ratings to see if there's a pattern that highly rated teams have better overtime results. A problem with this, however, would be that it ignores fitness as a factor. Some highly rated teams have relatively small rosters and use limited substitutions, being able to do this partly because of very high fitness levels.

    My question, however, has to do with whether an overtime win/loss is more like an average regular time win/loss or more like an average regular time tie. That is something I can test for, once I've entered all the overtime games data into my system. That is an exceedingly boring and time-consuming task, but it's "in process." I think it's an important question since all the mathematical rating systems I know of treat overtime win/loss results the same as your average regular time win/loss results. I'm wondering whether they should be doing that.
     
  16. Hooked003

    Hooked003 Member

    Jan 28, 2014
    My guess is that fitness and depth have inverse relationships with respect to predictive value. My assumption would be that fitness is a factor that might matter greatly early in a season but becomes less important toward the end of the season when everybody has had a chance to get sufficiently fit. In contrast, depth would seem to become more important as the season progresses, as injuries mount.

    I could imagine treating every game as a 90-minute event and just ignoring OT results. Yet, that's not how the game is played. OT wins and losses are not ties. If one were to ignore OT, then why not ignore the second half? Why not ignore the minutes from 80-90? For predictive purposes, perhaps one can ignore certain segments of the match. It could be that what happens in those segments is truly random or it could be that those segments add nothing of value when predicting. My guess would be that every segment contributes something of value to making predictions, but that's just me guessing.
     
  17. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    True but .... In tournaments, games go to PKs, but most developers of rating systems treat games that go to PKs as ties. So, the "that's how the game is played" position already doesn't apply there. And, the "golden goal" element of college OT games adds a peculiarity, as compared to the old system of two 15 minute overtimes played to completion.

    My interest is in which rating system produces ratings that best correlate with the results of the games from which the ratings were derived. (In other words, I'm interested in ratings as retrodictive, which is how the NCAA uses them, rather than predictive.) So, from a retrodictive perspective the question is, if a system treats OT games as wins/losses, how do its end-of-season ratings correlate with the results of the past season's games; and, if a system treats OT games as ties, how do its ratings correlate. For a ratings nerd like me, it's an intriguing question to which I don't know the answer, although the generally accepted answer so far has been that treating them as wins/losses is better. But hopefully, once I've entered all the OT data into my system, I'll be able to come up with an answer -- the correlations are better treating OT games as wins/losses, they're better treating the games as ties, or it doesn't matter in the slightest.

    My gut feeling is that treating OT games as ties will produce ratings that correlate slightly better with game results, but when it comes to rating systems I'm not a believer in gut feelings -- except as a motivator to do the data work and then see how the correlations work when I change OT games from wins/losses to ties.
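    To make the test concrete, here's the skeleton of the comparison I have in mind (hypothetical data structures; my actual correlator and performance percentage calculations are more involved): score a set of end-of-season ratings by how often the higher-rated team got the better result, once with overtime games left as wins/losses and once with them recoded as ties.

```python
# Sketch of a retrodictive check.  'games' entries are hypothetical:
# (team_a, team_b, points_for_a, went_to_ot) with points_for_a in {1, 0.5, 0}.
# The score is the average result earned by the higher-rated team.

def retrodictive_score(games, ratings, ot_as_tie=False):
    earned, total = 0.0, 0
    for a, b, pts_a, went_to_ot in games:
        if ot_as_tie and went_to_ot:
            pts_a = 0.5                        # recode the OT result as a tie
        higher_is_a = ratings[a] >= ratings[b]
        earned += pts_a if higher_is_a else 1 - pts_a
        total += 1
    return earned / total

ratings = {"X": 1550, "Y": 1500, "Z": 1400}                     # toy ratings
games = [("X", "Y", 1, True), ("Y", "Z", 1, False), ("X", "Z", 0, False)]
print(retrodictive_score(games, ratings, ot_as_tie=False))      # 0.667
print(retrodictive_score(games, ratings, ot_as_tie=True))       # 0.5
```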
     
  18. Hooked003

    Hooked003 Member

    Jan 28, 2014
    I look forward to reading the answer!
     
  19. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Here are some thoughts about the difference between the NCAA’s approach to rating teams using the RPI and Elo-like systems’ approaches to rating teams:


    All the creators I know of, of systems for rating competitors’ performance, agree on one thing: the two basic building blocks for the systems must be (1) the strengths of the opponents you played and (2) how you did against them. There also can be other factors, such as the significance of the level of competition – FIFA’s women’s national team ratings weight the value of results differently depending on the “stature” of the competition, with World Cup results being the most highly weighted. But even for FIFA, the basic building blocks are strengths of opponents and how you did against them.


    How you did against an opponent is relatively easy to measure. Fundamentally, you won, you lost, or you tied (for sports in which a tie is a potential outcome). There can be refinements, depending on the sport. Where did you play the game – at home, away, or at a neutral site? What was the score differential? Did the game go to overtime? Did it go to an additional overtime (such as a shootout in soccer)? And so on.


    The strengths of the opponents you played are more difficult to measure. This is where the essential difference between the RPI and Elo-type systems occurs.


    In an Elo-type system, each competitor starts with a rating. For a sport that has no season beginning and end, but where the competition rather is continuous without end – such as chess – the ratings of competitors at any time simply are the ratings as of the last rating computation date. This also is true for FIFA women’s national team soccer ratings. When applied to a seasonal sport such as NCAA soccer, this means that at the beginning of each season, a team’s starting rating ordinarily is the rating it had at the end of the prior season (the last rating computation date). For a competitor that is new, the competitor receives a “provisional” rating -- for example, the median rating of all teams.


    It is possible, however, to run an Elo-type system for a seasonal sport without relying on “end of prior season” ratings. For example, all teams could start a new season with a common provisional rating such as the median rating over the preceding three or four years. The “experts” on Elo-type systems, however, assert that using a common provisional rating produces rating results that do not best reflect teams’ “true” performance, as compared to using provisional ratings based on prior performance.


    In an Elo-type system, when two teams play each other, the system identifies the rating difference between the two teams. The way the system is designed, the size of the rating difference correlates with an expected result probability: the greater the rating difference the higher the probability that the higher rated team will win the game. At specific times (for my purposes here, I’m going to assume after each game), the system adjusts the opponents’ ratings based on a formula whose variables are the two teams’ ratings and the game result. The winning team’s rating increases and the losing team’s rating decreases in like amounts, or the tying teams’ ratings increase and decrease in like amounts, with the amounts depending on the teams’ ratings, the result, and the probability of that result. So, for example, if a team is much more highly rated than the other, a win will produce only a very slight increase in its rating and decrease in its opponent’s rating, whereas if a team is rated much more poorly than the other but wins, its win will produce a large increase in its rating and a large decrease in its opponent’s rating. What this means is that with an Elo-type system, each rating adjustment is tailored to both the result and the strength of the team’s particular opponent.
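    To put rough numbers on that description, here's a minimal sketch of a single-game adjustment in a generic Elo-type system (generic K factor and scale; not necessarily the exact formula I'm running):

```python
# Sketch of one Elo-type rating adjustment.  K controls how far a single
# game can move a rating; the expected score comes from the rating gap.

def expected(r_team, r_opp):
    return 1.0 / (1.0 + 10 ** (-(r_team - r_opp) / 400.0))

def update(r_team, r_opp, score, k=40):
    """score: 1 = win, 0.5 = tie, 0 = loss.  Returns the two new ratings."""
    change = k * (score - expected(r_team, r_opp))
    return r_team + change, r_opp - change    # equal and opposite shifts

# Big favorite wins: ratings barely move.
print(update(1600, 1300, 1))
# Big underdog wins: ratings move a lot.
print(update(1300, 1600, 1))
```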


    One way to look at an Elo-type system that treats competition as continuous, rather than starting anew at the beginning of each season, is that a team goes into the season with a provisional rating based on past history. The system then, after the first game of the current season, adjusts the provisional rating to bring it closer to the team’s true current performance by applying its formula to the team and its first opponent, based on the two teams’ provisional ratings. After the second game, the system makes a further adjustment. By doing this over and over through the course of the season, the system over time brings teams’ ratings close to their true performance during the current season.


    The NCAA’s RPI takes a very different approach. It starts at the beginning of the season with the assumption that all teams are of equal strength -- essentially, each team starts with a 0.5000 rating. It then adjusts a team’s rating after the first game based on whether the team won or lost the game. Due to a quirk of the RPI (in determining the strength of Team A’s opponent, it disregards that opponent’s record against Team A itself), after the first game a team’s rating is either 1.0000 (it won the game), 0.0000 (it lost the game), or 0.5000 (it tied the game). For each subsequent game, the rating then is further adjusted, this time based on the team’s cumulative record to date, its opponents’ cumulative records to date (subject to the quirk), and its opponents’ opponents’ cumulative records to date.
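    As a concrete illustration of that first-game behavior, here's a small sketch of the basic RPI elements (hypothetical code, not the NCAA's; the usual 25/50/25 weighting of WP, OWP, and OOWP is assumed, and the handling of empty opponent records is my simplification, chosen to reproduce the 1.0000 / 0.0000 / 0.5000 first-game behavior described above):

```python
# Sketch of the basic RPI building blocks (not the NCAA's code).  The usual
# weights, assumed here, are 0.25*WP + 0.50*OWP + 0.25*OOWP, with a tie worth
# half a win.  'results' is a hypothetical list of (team, opponent, points),
# points being 1 / 0.5 / 0 from the first team's perspective, one row per side.

def wp(team, results, exclude=None):
    """Winning percentage; 'exclude' is the quirk: ignore games against the
    team currently being rated when computing its opponents' records."""
    pts = [p for t, o, p in results if t == team and o != exclude]
    return sum(pts) / len(pts) if pts else None

def avg(vals):
    vals = [v for v in vals if v is not None]
    return sum(vals) / len(vals) if vals else None

def owp(team, results):
    opps = [o for t, o, _ in results if t == team]
    return avg(wp(o, results, exclude=team) for o in opps)

def oowp(team, results):
    opps = [o for t, o, _ in results if t == team]
    return avg(owp(o, results) for o in opps)

def rpi(team, results):
    o, oo = owp(team, results), oowp(team, results)
    if o is None or oo is None:          # e.g., after each team's first game
        return wp(team, results)         # the rating collapses to WP alone
    return 0.25 * wp(team, results) + 0.50 * o + 0.25 * oo

results = [("A", "B", 1), ("B", "A", 0)]     # toy: A beat B, nothing else yet
print(rpi("A", results), rpi("B", results))  # 1.0 and 0.0, as described above
```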


    Using the basic RPI system, it does not matter what a team’s result was against a particular opponent. The team could have beaten the #1 ranked team and lost to the #332 ranked team (a highly improbable sequence of events), or lost to the #1 ranked team and beaten the #332 ranked team (a highly probable sequence of events); its basic RPI rating will be the same either way. (The Adjusted RPI, however, does reward teams with good results against high-ranked opponents and penalize teams with poor results against low-ranked opponents. The reward and penalty amounts, however, are relatively small.)


    One could say about the RPI that a team goes into the season with a provisional rating: the provisional rating is that the team is equal to every other team. The RPI then, at the end of the season, adjusts this provisional “equal” rating based on teams’ cumulative records, their opponents’ cumulative records, and their opponents’ opponents’ cumulative records, to produce final end-of-season ratings.


    If you look at the two systems this way, a very important question is: Which provisional ratings are a better starting point from which to adjust to find teams’ ratings that reflect teams’ true performance? The Elo-type provisional ratings which are teams’ ratings at the end of the previous year? Or the RPI-type provisional ratings which are essentially 0.5000 for each team?


    Or, to use the upcoming 2015 season for illustration, which provisional ratings are likely to be better as a starting point from which to adjust teams’ ratings over the course of the season in order to get their “true” end of season ratings? Elo-type ratings (using the formula I’m currently working with) that would start the season with provisional ratings having teams at the top ranked as follows: (1) Florida State, (2) UCLA, (3) Stanford, (4) Virginia; and teams ranked at the bottom as follows: (329) Southern U, (330) Arkansas Pine Bluff, (331) Grambling, (332) Alcorn State? Or RPI-type ratings that would start the season with all teams, including those just mentioned, ranked the same at 166.5 (the mid-point for 332 teams)?


    To me, from a theoretical perspective the answer seems pretty clear. It is better to start with provisional ratings that are closer to teams’ true ratings going into the season. If that’s the case, then which provisional ratings are more likely to be closer to teams’ true ratings -- using 2015 as an example, ratings that rate Florida State, UCLA, Stanford, and Virginia as stronger than Southern U, Arkansas Pine Bluff, Grambling, and Alcorn State; or ratings that rate those teams as of equal strength? Obviously, the provisional ratings that rate Florida State, UCLA, Stanford, and Virginia as stronger are more likely to be closer to teams’ true ratings. And, because of that, they are better to start with as provisional ratings from which to adjust in order to arrive at final end of season ratings.


    Ultimately, of course, the answer is not in what makes sense from a theoretical perspective, but rather is in objective tests that measure how well Elo-type ratings correlate with the results of the season’s games as compared to how well RPI ratings correlate with those results. That answer is yet to come, but in a couple more weeks I will have it.


    But, I think the above discussion does reveal faulty thinking in the NCAA’s position on why it uses the RPI. The NCAA says it will not use a system that starts with teams given provisional ratings based on past history because such a system is not “fair.” What is faulty about this is that the NCAA itself starts teams with provisional ratings, though ones that disregard history – all teams have the same provisional rating. That also is not “fair.” Thus the question isn’t whether the RPI or another system, such as an Elo-type system, is “fair.” Rather, the question is which system’s starting provisional ratings are the “least unfair.” I believe the above discussion shows that from a theoretical perspective, an Elo-type system’s starting provisional ratings, although they are not completely “fair,” are more fair than the RPI’s starting provisional ratings.


    I think this is pretty obvious, and I think at least the NCAA’s RPI staff is well aware of it. If I’m right, then why does the NCAA insist on the RPI and oppose using a system that starts with provisional ratings based on past history? On this question, I start with the assumption that women’s soccer has little significance in the NCAA’s thinking about the RPI. The NCAA developed the RPI for men’s basketball, that’s where the money is for the NCAA, and it’s the relationship between the RPI and March Madness that the NCAA cares about. One of the characteristics of the RPI, due to the details of its structure, is that it tends to underrate teams from stronger conferences and overrate teams from weaker conferences. And a specific effect of this tendency is that there typically are a couple of mid-majors whose top team is significantly overrated and thus in the pool to get an at-large selection. Although the NCAA’s RPI staff obfuscates when asked to address this issue, I’m pretty sure they’re well aware of it. So, why might the NCAA staff think the RPI’s bias is a good thing? March Madness has its regular repeat teams who generate a lot of attention, but part of the excitement -- especially to those who aren’t affiliated with one of the regular repeat teams or conferences -- is about potential “Cinderella” mid-major teams. By using the RPI with its built-in bias, the NCAA enhances the likelihood of one or two potential Cinderellas at least making it into the tournament.


    The work I’ve done so far (results not yet published), on the other hand, indicates that an Elo-type system -- at least the one I’m testing -- unlike the RPI, does not discriminate among conferences in relation to conference strength. And, more specifically, it does not tend to significantly overrate top teams from mid-majors. Thus it does not enhance the likelihood of one or two potential Cinderellas making it into the tournament. That, I suspect, is at least one of the real reasons the NCAA has opposed dropping the RPI and going to an Elo-type system.


    Again, however, the proof will be in the correlations. On that, I’ll need a couple more weeks before I can do a “real,” non-theoretical comparison between the RPI and an Elo-type system.
     
    kolabear repped this.
  20. Hooked003

    Hooked003 Member

    Jan 28, 2014
    I don't doubt that March Madness money matters a great deal to the NCAA (given that it keeps 40% of the men's basketball revenue for itself) and that Cinderella teams drive ratings higher (which drives up TV-based revenue over time), but how much of the bias you've identified actually matters to the March Madness event? Because every conference gets an automatic bid, there are lots of potential Cinderella teams every year. So, the only additional Cinderella advantage to the bias would be for teams that can go far in the tournament but fail to win their conference tournament. Are there really that many of those teams?
     
    kolabear repped this.
  21. Hooked003

    Hooked003 Member

    Jan 28, 2014
    If one is going to examine discrimination, with respect to the distribution of NCAA men's basketball revenue, it is probably worthwhile to consider that the money is not paid to individual teams. The money is paid to the conferences. Every conference gets at least 1 team into the event and, so, every conference gets at least x dollars. The only discrimination that would matter -- when looking at the money -- would be whether the RPI discriminates at the conference pay-out level. Although the conferences are free to distribute the money as they want, it appears that the larger conferences distribute the money equally to each school (so the school that didn't go to the dance receives as much money as its conference-mate that went to the final) and the smaller conferences keep the money to fund their conferences' offices. (The NCAA pays for travel and lodging for each team that participates.)
     
    kolabear repped this.
  22. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    No, there aren't many of those teams, but for Division I women's soccer there are one or two or maybe three each year that get into the bubble due to the RPI's inherent bias, so I'm guessing the same is true for DI men's basketball. And, for basketball, even if those teams don't actually get at-large selections, it adds to the pizzazz of the bracketology lead-up, which is part of the PR buildup that produces $$$.

    But, I could be completely wrong. I'm not inside the NCAA staff, so I don't know. I'm simply looking for a logical reason why the NCAA staff would be so defensive of the RPI. I could say they're simply dumb and don't understand the RPI's inherent bias and other problems, but I don't think that's true. I'm pretty confident they're well aware of the RPI's problems. I could say they're simply stubborn, but I doubt that's a complete explanation. My suspicion is that they have a reason that is logical to them for holding so strongly to the RPI, so I'm trying to figure out what reason would seem logical to them. If anyone has an idea of a better reason, please let us know.
     
    kolabear repped this.
  23. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    To relieve my boredom as I assemble data on which games were overtime games, I went back and looked at the percentages of games that went to overtime in 2009 through 2012 (that's as far as I've gotten in assembling the data), to add to what I'd already found for 2007 and 2008.

    So, the fact for the day is that the percentages of overtime games are remarkably consistent from season to season:

    2007 21%
    2008 21%
    2009 20% (just missed 21% in the rounding off process)
    2010 21%
    2011 21%
    2012 21%
     
  24. Hooked003

    Hooked003 Member

    Jan 28, 2014
    That's pretty amazing. The coach can control the Out-Of-Conference (OOC) opponents and location, but not the in-conference (IC) opponents and location, so I wonder if there's any difference between OOC and IC when it comes to going to OT?
     
  25. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Good question. I'll see if I can come up with an answer next time I need a breather in entering the tie games data.
     
